00:00:00.001 Started by upstream project "autotest-per-patch" build number 126176 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.031 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.032 The recommended git tool is: git 00:00:00.032 using credential 00000000-0000-0000-0000-000000000002 00:00:00.033 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.049 Fetching changes from the remote Git repository 00:00:00.055 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.087 Using shallow fetch with depth 1 00:00:00.087 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.087 > git --version # timeout=10 00:00:00.130 > git --version # 'git version 2.39.2' 00:00:00.130 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.186 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.186 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.953 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.963 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.974 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:02.974 > git config core.sparsecheckout # timeout=10 00:00:02.984 > git read-tree -mu HEAD # timeout=10 00:00:03.002 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:03.030 Commit message: "inventory: add WCP3 to free inventory" 00:00:03.030 > git rev-list --no-walk 
9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:03.106 [Pipeline] Start of Pipeline 00:00:03.120 [Pipeline] library 00:00:03.121 Loading library shm_lib@master 00:00:03.121 Library shm_lib@master is cached. Copying from home. 00:00:03.137 [Pipeline] node 00:00:03.143 Running on WFP16 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.145 [Pipeline] { 00:00:03.157 [Pipeline] catchError 00:00:03.159 [Pipeline] { 00:00:03.173 [Pipeline] wrap 00:00:03.184 [Pipeline] { 00:00:03.193 [Pipeline] stage 00:00:03.195 [Pipeline] { (Prologue) 00:00:03.369 [Pipeline] sh 00:00:03.646 + logger -p user.info -t JENKINS-CI 00:00:03.663 [Pipeline] echo 00:00:03.664 Node: WFP16 00:00:03.670 [Pipeline] sh 00:00:03.963 [Pipeline] setCustomBuildProperty 00:00:03.972 [Pipeline] echo 00:00:03.973 Cleanup processes 00:00:03.977 [Pipeline] sh 00:00:04.258 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.258 3614099 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.271 [Pipeline] sh 00:00:04.554 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.554 ++ grep -v 'sudo pgrep' 00:00:04.554 ++ awk '{print $1}' 00:00:04.554 + sudo kill -9 00:00:04.554 + true 00:00:04.567 [Pipeline] cleanWs 00:00:04.576 [WS-CLEANUP] Deleting project workspace... 00:00:04.576 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.582 [WS-CLEANUP] done 00:00:04.587 [Pipeline] setCustomBuildProperty 00:00:04.601 [Pipeline] sh 00:00:04.879 + sudo git config --global --replace-all safe.directory '*' 00:00:04.948 [Pipeline] httpRequest 00:00:04.981 [Pipeline] echo 00:00:04.982 Sorcerer 10.211.164.101 is alive 00:00:04.987 [Pipeline] httpRequest 00:00:04.991 HttpMethod: GET 00:00:04.991 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:04.992 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:04.995 Response Code: HTTP/1.1 200 OK 00:00:04.996 Success: Status code 200 is in the accepted range: 200,404 00:00:04.996 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.784 [Pipeline] sh 00:00:06.066 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.078 [Pipeline] httpRequest 00:00:06.092 [Pipeline] echo 00:00:06.093 Sorcerer 10.211.164.101 is alive 00:00:06.100 [Pipeline] httpRequest 00:00:06.104 HttpMethod: GET 00:00:06.105 URL: http://10.211.164.101/packages/spdk_32a79de8133ad689677affde0752bde11536e17e.tar.gz 00:00:06.105 Sending request to url: http://10.211.164.101/packages/spdk_32a79de8133ad689677affde0752bde11536e17e.tar.gz 00:00:06.107 Response Code: HTTP/1.1 200 OK 00:00:06.107 Success: Status code 200 is in the accepted range: 200,404 00:00:06.108 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_32a79de8133ad689677affde0752bde11536e17e.tar.gz 00:00:26.526 [Pipeline] sh 00:00:26.808 + tar --no-same-owner -xf spdk_32a79de8133ad689677affde0752bde11536e17e.tar.gz 00:00:31.006 [Pipeline] sh 00:00:31.288 + git -C spdk log --oneline -n5 00:00:31.288 32a79de81 lib/event: add disable_cpumask_locks to spdk_app_opts 00:00:31.288 719d03c6a sock/uring: only register net impl if supported 00:00:31.288 e64f085ad vbdev_lvol_ut: unify usage of dummy base 
bdev 00:00:31.288 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:31.288 6c7c1f57e accel: add sequence outstanding stat 00:00:31.299 [Pipeline] } 00:00:31.316 [Pipeline] // stage 00:00:31.324 [Pipeline] stage 00:00:31.326 [Pipeline] { (Prepare) 00:00:31.350 [Pipeline] writeFile 00:00:31.369 [Pipeline] sh 00:00:31.650 + logger -p user.info -t JENKINS-CI 00:00:31.660 [Pipeline] sh 00:00:31.937 + logger -p user.info -t JENKINS-CI 00:00:31.949 [Pipeline] sh 00:00:32.230 + cat autorun-spdk.conf 00:00:32.230 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.230 SPDK_TEST_NVMF=1 00:00:32.230 SPDK_TEST_NVME_CLI=1 00:00:32.230 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:32.230 SPDK_TEST_NVMF_NICS=e810 00:00:32.230 SPDK_TEST_VFIOUSER=1 00:00:32.230 SPDK_RUN_UBSAN=1 00:00:32.230 NET_TYPE=phy 00:00:32.237 RUN_NIGHTLY=0 00:00:32.243 [Pipeline] readFile 00:00:32.268 [Pipeline] withEnv 00:00:32.269 [Pipeline] { 00:00:32.283 [Pipeline] sh 00:00:32.565 + set -ex 00:00:32.565 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:32.565 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:32.565 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.565 ++ SPDK_TEST_NVMF=1 00:00:32.565 ++ SPDK_TEST_NVME_CLI=1 00:00:32.565 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:32.566 ++ SPDK_TEST_NVMF_NICS=e810 00:00:32.566 ++ SPDK_TEST_VFIOUSER=1 00:00:32.566 ++ SPDK_RUN_UBSAN=1 00:00:32.566 ++ NET_TYPE=phy 00:00:32.566 ++ RUN_NIGHTLY=0 00:00:32.566 + case $SPDK_TEST_NVMF_NICS in 00:00:32.566 + DRIVERS=ice 00:00:32.566 + [[ tcp == \r\d\m\a ]] 00:00:32.566 + [[ -n ice ]] 00:00:32.566 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:32.566 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:39.135 rmmod: ERROR: Module irdma is not currently loaded 00:00:39.135 rmmod: ERROR: Module i40iw is not currently loaded 00:00:39.135 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:39.135 + true 00:00:39.135 + for D in $DRIVERS 00:00:39.135 + 
sudo modprobe ice 00:00:39.135 + exit 0 00:00:39.143 [Pipeline] } 00:00:39.159 [Pipeline] // withEnv 00:00:39.163 [Pipeline] } 00:00:39.177 [Pipeline] // stage 00:00:39.188 [Pipeline] catchError 00:00:39.190 [Pipeline] { 00:00:39.206 [Pipeline] timeout 00:00:39.206 Timeout set to expire in 50 min 00:00:39.207 [Pipeline] { 00:00:39.221 [Pipeline] stage 00:00:39.224 [Pipeline] { (Tests) 00:00:39.241 [Pipeline] sh 00:00:39.523 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:39.523 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:39.523 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:39.523 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:39.523 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:39.523 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:39.523 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:39.523 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:39.523 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:39.523 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:39.523 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:39.523 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:39.523 + source /etc/os-release 00:00:39.523 ++ NAME='Fedora Linux' 00:00:39.523 ++ VERSION='38 (Cloud Edition)' 00:00:39.523 ++ ID=fedora 00:00:39.523 ++ VERSION_ID=38 00:00:39.523 ++ VERSION_CODENAME= 00:00:39.523 ++ PLATFORM_ID=platform:f38 00:00:39.523 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:39.523 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:39.523 ++ LOGO=fedora-logo-icon 00:00:39.523 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:39.523 ++ HOME_URL=https://fedoraproject.org/ 00:00:39.523 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:39.524 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:39.524 ++ 
BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:39.524 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:39.524 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:39.524 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:39.524 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:39.524 ++ SUPPORT_END=2024-05-14 00:00:39.524 ++ VARIANT='Cloud Edition' 00:00:39.524 ++ VARIANT_ID=cloud 00:00:39.524 + uname -a 00:00:39.524 Linux spdk-wfp-16 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:39.524 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:42.059 Hugepages 00:00:42.059 node hugesize free / total 00:00:42.059 node0 1048576kB 0 / 0 00:00:42.059 node0 2048kB 0 / 0 00:00:42.059 node1 1048576kB 0 / 0 00:00:42.059 node1 2048kB 0 / 0 00:00:42.059 00:00:42.059 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:42.059 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:42.059 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:42.059 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:42.059 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:42.059 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:42.059 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:42.059 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:42.059 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:42.059 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:42.059 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:42.059 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:42.059 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:42.059 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:42.059 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:42.059 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:42.059 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:42.059 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:42.059 + rm -f /tmp/spdk-ld-path 00:00:42.059 + source autorun-spdk.conf 00:00:42.059 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:42.059 ++ SPDK_TEST_NVMF=1 00:00:42.059 ++ 
SPDK_TEST_NVME_CLI=1 00:00:42.059 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:42.059 ++ SPDK_TEST_NVMF_NICS=e810 00:00:42.059 ++ SPDK_TEST_VFIOUSER=1 00:00:42.059 ++ SPDK_RUN_UBSAN=1 00:00:42.059 ++ NET_TYPE=phy 00:00:42.059 ++ RUN_NIGHTLY=0 00:00:42.059 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:42.059 + [[ -n '' ]] 00:00:42.059 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:42.059 + for M in /var/spdk/build-*-manifest.txt 00:00:42.059 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:42.059 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:42.059 + for M in /var/spdk/build-*-manifest.txt 00:00:42.059 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:42.059 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:42.059 ++ uname 00:00:42.059 + [[ Linux == \L\i\n\u\x ]] 00:00:42.059 + sudo dmesg -T 00:00:42.059 + sudo dmesg --clear 00:00:42.059 + dmesg_pid=3615541 00:00:42.059 + [[ Fedora Linux == FreeBSD ]] 00:00:42.059 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:42.059 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:42.059 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:42.059 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:42.059 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:42.059 + [[ -x /usr/src/fio-static/fio ]] 00:00:42.059 + sudo dmesg -Tw 00:00:42.059 + export FIO_BIN=/usr/src/fio-static/fio 00:00:42.059 + FIO_BIN=/usr/src/fio-static/fio 00:00:42.059 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:42.059 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:42.059 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:42.059 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:42.059 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:42.059 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:42.059 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:42.059 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:42.059 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:42.059 Test configuration: 00:00:42.059 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:42.059 SPDK_TEST_NVMF=1 00:00:42.059 SPDK_TEST_NVME_CLI=1 00:00:42.059 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:42.059 SPDK_TEST_NVMF_NICS=e810 00:00:42.059 SPDK_TEST_VFIOUSER=1 00:00:42.059 SPDK_RUN_UBSAN=1 00:00:42.059 NET_TYPE=phy 00:00:42.059 RUN_NIGHTLY=0 12:30:33 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:42.059 12:30:33 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:42.059 12:30:33 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:42.059 12:30:33 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:42.060 12:30:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.060 12:30:33 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.060 12:30:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.060 12:30:33 -- paths/export.sh@5 -- $ export PATH 00:00:42.060 12:30:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.060 12:30:33 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:42.060 12:30:33 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:42.060 12:30:33 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721039433.XXXXXX 00:00:42.060 12:30:33 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721039433.kzs7tk 00:00:42.060 12:30:33 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:42.060 12:30:33 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:42.060 12:30:33 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 
00:00:42.060 12:30:33 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:42.060 12:30:33 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:42.060 12:30:33 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:42.060 12:30:33 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:42.060 12:30:33 -- common/autotest_common.sh@10 -- $ set +x 00:00:42.318 12:30:34 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:42.318 12:30:34 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:42.318 12:30:34 -- pm/common@17 -- $ local monitor 00:00:42.318 12:30:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.318 12:30:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.318 12:30:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.318 12:30:34 -- pm/common@21 -- $ date +%s 00:00:42.318 12:30:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.318 12:30:34 -- pm/common@25 -- $ sleep 1 00:00:42.318 12:30:34 -- pm/common@21 -- $ date +%s 00:00:42.318 12:30:34 -- pm/common@21 -- $ date +%s 00:00:42.318 12:30:34 -- pm/common@21 -- $ date +%s 00:00:42.318 12:30:34 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721039434 00:00:42.318 12:30:34 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721039434 00:00:42.318 12:30:34 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721039434 00:00:42.318 12:30:34 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721039434 00:00:42.318 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721039434_collect-vmstat.pm.log 00:00:42.319 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721039434_collect-cpu-load.pm.log 00:00:42.319 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721039434_collect-cpu-temp.pm.log 00:00:42.319 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721039434_collect-bmc-pm.bmc.pm.log 00:00:43.255 12:30:35 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:43.255 12:30:35 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:43.255 12:30:35 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:43.255 12:30:35 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:43.255 12:30:35 -- spdk/autobuild.sh@16 -- $ date -u 00:00:43.255 Mon Jul 15 10:30:35 AM UTC 2024 00:00:43.255 12:30:35 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:43.255 v24.09-pre-203-g32a79de81 00:00:43.255 12:30:35 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:43.255 12:30:35 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:43.255 12:30:35 -- spdk/autobuild.sh@24 -- $ run_test 
ubsan echo 'using ubsan' 00:00:43.255 12:30:35 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:43.255 12:30:35 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:43.255 12:30:35 -- common/autotest_common.sh@10 -- $ set +x 00:00:43.255 ************************************ 00:00:43.255 START TEST ubsan 00:00:43.255 ************************************ 00:00:43.255 12:30:35 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:43.255 using ubsan 00:00:43.255 00:00:43.255 real 0m0.000s 00:00:43.255 user 0m0.000s 00:00:43.255 sys 0m0.000s 00:00:43.255 12:30:35 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:43.255 12:30:35 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:43.255 ************************************ 00:00:43.255 END TEST ubsan 00:00:43.255 ************************************ 00:00:43.255 12:30:35 -- common/autotest_common.sh@1142 -- $ return 0 00:00:43.255 12:30:35 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:43.255 12:30:35 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:43.255 12:30:35 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:43.255 12:30:35 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:43.255 12:30:35 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:43.255 12:30:35 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:43.255 12:30:35 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:43.255 12:30:35 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:43.255 12:30:35 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:43.514 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:43.514 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:43.772 Using 'verbs' 
RDMA provider 00:00:59.597 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:14.478 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:14.478 Creating mk/config.mk...done. 00:01:14.478 Creating mk/cc.flags.mk...done. 00:01:14.478 Type 'make' to build. 00:01:14.478 12:31:04 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:14.478 12:31:04 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:14.478 12:31:04 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:14.478 12:31:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.478 ************************************ 00:01:14.478 START TEST make 00:01:14.478 ************************************ 00:01:14.478 12:31:04 make -- common/autotest_common.sh@1123 -- $ make -j112 00:01:14.478 make[1]: Nothing to be done for 'all'. 00:01:14.736 The Meson build system 00:01:14.736 Version: 1.3.1 00:01:14.736 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:14.736 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:14.736 Build type: native build 00:01:14.736 Project name: libvfio-user 00:01:14.736 Project version: 0.0.1 00:01:14.736 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:14.736 C linker for the host machine: cc ld.bfd 2.39-16 00:01:14.736 Host machine cpu family: x86_64 00:01:14.736 Host machine cpu: x86_64 00:01:14.736 Run-time dependency threads found: YES 00:01:14.736 Library dl found: YES 00:01:14.736 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:14.736 Run-time dependency json-c found: YES 0.17 00:01:14.736 Run-time dependency cmocka found: YES 1.1.7 00:01:14.736 Program pytest-3 found: NO 00:01:14.736 Program flake8 found: NO 00:01:14.736 Program misspell-fixer found: NO 00:01:14.736 Program restructuredtext-lint found: NO 
00:01:14.736 Program valgrind found: YES (/usr/bin/valgrind) 00:01:14.736 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:14.736 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:14.736 Compiler for C supports arguments -Wwrite-strings: YES 00:01:14.736 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:14.736 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:14.736 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:14.736 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:14.736 Build targets in project: 8 00:01:14.736 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:14.736 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:14.736 00:01:14.736 libvfio-user 0.0.1 00:01:14.736 00:01:14.736 User defined options 00:01:14.736 buildtype : debug 00:01:14.736 default_library: shared 00:01:14.736 libdir : /usr/local/lib 00:01:14.736 00:01:14.736 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:15.304 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:15.304 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:15.304 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:15.304 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:15.304 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:15.304 [5/37] Compiling C object samples/null.p/null.c.o 00:01:15.304 [6/37] Compiling C object 
lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:15.304 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:15.304 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:15.304 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:15.304 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:15.304 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:15.304 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:15.304 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:15.304 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:15.304 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:15.304 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:15.304 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:15.304 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:15.304 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:15.304 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:15.304 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:15.304 [22/37] Compiling C object samples/server.p/server.c.o 00:01:15.304 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:15.304 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:15.304 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:15.562 [26/37] Compiling C object samples/client.p/client.c.o 00:01:15.562 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:15.562 [28/37] Linking target samples/client 00:01:15.562 [29/37] Linking target test/unit_tests 00:01:15.562 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:15.562 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:15.562 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:15.821 [33/37] 
Linking target samples/shadow_ioeventfd_server
00:01:15.821 [34/37] Linking target samples/lspci
00:01:15.821 [35/37] Linking target samples/server
00:01:15.821 [36/37] Linking target samples/gpio-pci-idio-16
00:01:15.821 [37/37] Linking target samples/null
00:01:15.821 INFO: autodetecting backend as ninja
00:01:15.821 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:15.821 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:16.080 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:16.339 ninja: no work to do.
00:01:23.031 The Meson build system
00:01:23.031 Version: 1.3.1
00:01:23.031 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:23.031 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:23.031 Build type: native build
00:01:23.031 Program cat found: YES (/usr/bin/cat)
00:01:23.031 Project name: DPDK
00:01:23.031 Project version: 24.03.0
00:01:23.032 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:23.032 C linker for the host machine: cc ld.bfd 2.39-16
00:01:23.032 Host machine cpu family: x86_64
00:01:23.032 Host machine cpu: x86_64
00:01:23.032 Message: ## Building in Developer Mode ##
00:01:23.032 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:23.032 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:23.032 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:23.032 Program python3 found: YES (/usr/bin/python3)
00:01:23.032 Program cat found: YES (/usr/bin/cat)
00:01:23.032 Compiler for C supports arguments -march=native: YES
00:01:23.032 Checking for size of "void *" : 8
00:01:23.032 Checking for size of "void *" : 8 (cached)
00:01:23.032 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:23.032 Library m found: YES
00:01:23.032 Library numa found: YES
00:01:23.032 Has header "numaif.h" : YES
00:01:23.032 Library fdt found: NO
00:01:23.032 Library execinfo found: NO
00:01:23.032 Has header "execinfo.h" : YES
00:01:23.032 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:23.032 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:23.032 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:23.032 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:23.032 Run-time dependency openssl found: YES 3.0.9
00:01:23.032 Run-time dependency libpcap found: YES 1.10.4
00:01:23.032 Has header "pcap.h" with dependency libpcap: YES
00:01:23.032 Compiler for C supports arguments -Wcast-qual: YES
00:01:23.032 Compiler for C supports arguments -Wdeprecated: YES
00:01:23.032 Compiler for C supports arguments -Wformat: YES
00:01:23.032 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:23.032 Compiler for C supports arguments -Wformat-security: NO
00:01:23.032 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:23.032 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:23.032 Compiler for C supports arguments -Wnested-externs: YES
00:01:23.032 Compiler for C supports arguments -Wold-style-definition: YES
00:01:23.032 Compiler for C supports arguments -Wpointer-arith: YES
00:01:23.032 Compiler for C supports arguments -Wsign-compare: YES
00:01:23.032 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:23.032 Compiler for C supports arguments -Wundef: YES
00:01:23.032 Compiler for C supports arguments -Wwrite-strings: YES
00:01:23.032 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:23.032 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:23.032 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:23.032 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:23.032 Program objdump found: YES (/usr/bin/objdump)
00:01:23.032 Compiler for C supports arguments -mavx512f: YES
00:01:23.032 Checking if "AVX512 checking" compiles: YES
00:01:23.032 Fetching value of define "__SSE4_2__" : 1
00:01:23.032 Fetching value of define "__AES__" : 1
00:01:23.032 Fetching value of define "__AVX__" : 1
00:01:23.032 Fetching value of define "__AVX2__" : 1
00:01:23.032 Fetching value of define "__AVX512BW__" : 1
00:01:23.032 Fetching value of define "__AVX512CD__" : 1
00:01:23.032 Fetching value of define "__AVX512DQ__" : 1
00:01:23.032 Fetching value of define "__AVX512F__" : 1
00:01:23.032 Fetching value of define "__AVX512VL__" : 1
00:01:23.032 Fetching value of define "__PCLMUL__" : 1
00:01:23.032 Fetching value of define "__RDRND__" : 1
00:01:23.032 Fetching value of define "__RDSEED__" : 1
00:01:23.032 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:23.032 Fetching value of define "__znver1__" : (undefined)
00:01:23.032 Fetching value of define "__znver2__" : (undefined)
00:01:23.032 Fetching value of define "__znver3__" : (undefined)
00:01:23.032 Fetching value of define "__znver4__" : (undefined)
00:01:23.032 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:23.032 Message: lib/log: Defining dependency "log"
00:01:23.032 Message: lib/kvargs: Defining dependency "kvargs"
00:01:23.032 Message: lib/telemetry: Defining dependency "telemetry"
00:01:23.032 Checking for function "getentropy" : NO
00:01:23.032 Message: lib/eal: Defining dependency "eal"
00:01:23.032 Message: lib/ring: Defining dependency "ring"
00:01:23.032 Message: lib/rcu: Defining dependency "rcu"
00:01:23.032 Message: lib/mempool: Defining dependency "mempool"
00:01:23.032 Message: lib/mbuf: Defining dependency "mbuf"
00:01:23.032 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:23.032 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:23.032 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:23.032 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:23.032 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:23.032 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:23.032 Compiler for C supports arguments -mpclmul: YES
00:01:23.032 Compiler for C supports arguments -maes: YES
00:01:23.032 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:23.032 Compiler for C supports arguments -mavx512bw: YES
00:01:23.032 Compiler for C supports arguments -mavx512dq: YES
00:01:23.032 Compiler for C supports arguments -mavx512vl: YES
00:01:23.032 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:23.032 Compiler for C supports arguments -mavx2: YES
00:01:23.032 Compiler for C supports arguments -mavx: YES
00:01:23.032 Message: lib/net: Defining dependency "net"
00:01:23.032 Message: lib/meter: Defining dependency "meter"
00:01:23.032 Message: lib/ethdev: Defining dependency "ethdev"
00:01:23.032 Message: lib/pci: Defining dependency "pci"
00:01:23.032 Message: lib/cmdline: Defining dependency "cmdline"
00:01:23.032 Message: lib/hash: Defining dependency "hash"
00:01:23.032 Message: lib/timer: Defining dependency "timer"
00:01:23.032 Message: lib/compressdev: Defining dependency "compressdev"
00:01:23.032 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:23.032 Message: lib/dmadev: Defining dependency "dmadev"
00:01:23.032 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:23.032 Message: lib/power: Defining dependency "power"
00:01:23.032 Message: lib/reorder: Defining dependency "reorder"
00:01:23.032 Message: lib/security: Defining dependency "security"
00:01:23.032 Has header "linux/userfaultfd.h" : YES
00:01:23.032 Has header "linux/vduse.h" : YES
00:01:23.032 Message: lib/vhost: Defining dependency "vhost"
00:01:23.032 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:23.032 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:23.032 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:23.032 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:23.032 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:23.032 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:23.032 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:23.032 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:23.032 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:23.032 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:23.032 Program doxygen found: YES (/usr/bin/doxygen)
00:01:23.032 Configuring doxy-api-html.conf using configuration
00:01:23.032 Configuring doxy-api-man.conf using configuration
00:01:23.032 Program mandb found: YES (/usr/bin/mandb)
00:01:23.032 Program sphinx-build found: NO
00:01:23.032 Configuring rte_build_config.h using configuration
00:01:23.032 Message:
00:01:23.032 =================
00:01:23.032 Applications Enabled
00:01:23.032 =================
00:01:23.032
00:01:23.032 apps:
00:01:23.032
00:01:23.032
00:01:23.032 Message:
00:01:23.032 =================
00:01:23.032 Libraries Enabled
00:01:23.032 =================
00:01:23.032
00:01:23.032 libs:
00:01:23.032 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:23.032 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:23.032 cryptodev, dmadev, power, reorder, security, vhost,
00:01:23.032
00:01:23.032 Message:
00:01:23.032 ===============
00:01:23.032 Drivers Enabled
00:01:23.032 ===============
00:01:23.032
00:01:23.032 common:
00:01:23.032
00:01:23.032 bus:
00:01:23.032 pci, vdev,
00:01:23.032 mempool:
00:01:23.032 ring,
00:01:23.032 dma:
00:01:23.032
00:01:23.032 net:
00:01:23.032
00:01:23.032 crypto:
00:01:23.032
00:01:23.032 compress:
00:01:23.032
00:01:23.032 vdpa:
00:01:23.032
00:01:23.032
00:01:23.032 Message:
00:01:23.032 =================
00:01:23.032 Content Skipped
00:01:23.032 =================
00:01:23.032
00:01:23.032 apps:
00:01:23.032 dumpcap: explicitly disabled via build config
00:01:23.032 graph: explicitly disabled via build config
00:01:23.032 pdump: explicitly disabled via build config
00:01:23.032 proc-info: explicitly disabled via build config
00:01:23.032 test-acl: explicitly disabled via build config
00:01:23.032 test-bbdev: explicitly disabled via build config
00:01:23.032 test-cmdline: explicitly disabled via build config
00:01:23.032 test-compress-perf: explicitly disabled via build config
00:01:23.032 test-crypto-perf: explicitly disabled via build config
00:01:23.032 test-dma-perf: explicitly disabled via build config
00:01:23.032 test-eventdev: explicitly disabled via build config
00:01:23.032 test-fib: explicitly disabled via build config
00:01:23.032 test-flow-perf: explicitly disabled via build config
00:01:23.032 test-gpudev: explicitly disabled via build config
00:01:23.032 test-mldev: explicitly disabled via build config
00:01:23.032 test-pipeline: explicitly disabled via build config
00:01:23.032 test-pmd: explicitly disabled via build config
00:01:23.032 test-regex: explicitly disabled via build config
00:01:23.032 test-sad: explicitly disabled via build config
00:01:23.032 test-security-perf: explicitly disabled via build config
00:01:23.032
00:01:23.032 libs:
00:01:23.032 argparse: explicitly disabled via build config
00:01:23.032 metrics: explicitly disabled via build config
00:01:23.032 acl: explicitly disabled via build config
00:01:23.032 bbdev: explicitly disabled via build config
00:01:23.032 bitratestats: explicitly disabled via build config
00:01:23.032 bpf: explicitly disabled via build config
00:01:23.032 cfgfile: explicitly disabled via build config
00:01:23.032 distributor: explicitly disabled via build config
00:01:23.032 efd: explicitly disabled via build config
00:01:23.032 eventdev: explicitly disabled via build config
00:01:23.032 dispatcher: explicitly disabled via build config
00:01:23.032 gpudev: explicitly disabled via build config
00:01:23.032 gro: explicitly disabled via build config
00:01:23.032 gso: explicitly disabled via build config
00:01:23.032 ip_frag: explicitly disabled via build config
00:01:23.032 jobstats: explicitly disabled via build config
00:01:23.032 latencystats: explicitly disabled via build config
00:01:23.032 lpm: explicitly disabled via build config
00:01:23.033 member: explicitly disabled via build config
00:01:23.033 pcapng: explicitly disabled via build config
00:01:23.033 rawdev: explicitly disabled via build config
00:01:23.033 regexdev: explicitly disabled via build config
00:01:23.033 mldev: explicitly disabled via build config
00:01:23.033 rib: explicitly disabled via build config
00:01:23.033 sched: explicitly disabled via build config
00:01:23.033 stack: explicitly disabled via build config
00:01:23.033 ipsec: explicitly disabled via build config
00:01:23.033 pdcp: explicitly disabled via build config
00:01:23.033 fib: explicitly disabled via build config
00:01:23.033 port: explicitly disabled via build config
00:01:23.033 pdump: explicitly disabled via build config
00:01:23.033 table: explicitly disabled via build config
00:01:23.033 pipeline: explicitly disabled via build config
00:01:23.033 graph: explicitly disabled via build config
00:01:23.033 node: explicitly disabled via build config
00:01:23.033
00:01:23.033 drivers:
00:01:23.033 common/cpt: not in enabled drivers build config
00:01:23.033 common/dpaax: not in enabled drivers build config
00:01:23.033 common/iavf: not in enabled drivers build config
00:01:23.033 common/idpf: not in enabled drivers build config
00:01:23.033 common/ionic: not in enabled drivers build config
00:01:23.033 common/mvep: not in enabled drivers build config
00:01:23.033 common/octeontx: not in enabled drivers build config
00:01:23.033 bus/auxiliary: not in enabled drivers build config
00:01:23.033 bus/cdx: not in enabled drivers build config
00:01:23.033 bus/dpaa: not in enabled drivers build config
00:01:23.033 bus/fslmc: not in enabled drivers build config
00:01:23.033 bus/ifpga: not in enabled drivers build config
00:01:23.033 bus/platform: not in enabled drivers build config
00:01:23.033 bus/uacce: not in enabled drivers build config
00:01:23.033 bus/vmbus: not in enabled drivers build config
00:01:23.033 common/cnxk: not in enabled drivers build config
00:01:23.033 common/mlx5: not in enabled drivers build config
00:01:23.033 common/nfp: not in enabled drivers build config
00:01:23.033 common/nitrox: not in enabled drivers build config
00:01:23.033 common/qat: not in enabled drivers build config
00:01:23.033 common/sfc_efx: not in enabled drivers build config
00:01:23.033 mempool/bucket: not in enabled drivers build config
00:01:23.033 mempool/cnxk: not in enabled drivers build config
00:01:23.033 mempool/dpaa: not in enabled drivers build config
00:01:23.033 mempool/dpaa2: not in enabled drivers build config
00:01:23.033 mempool/octeontx: not in enabled drivers build config
00:01:23.033 mempool/stack: not in enabled drivers build config
00:01:23.033 dma/cnxk: not in enabled drivers build config
00:01:23.033 dma/dpaa: not in enabled drivers build config
00:01:23.033 dma/dpaa2: not in enabled drivers build config
00:01:23.033 dma/hisilicon: not in enabled drivers build config
00:01:23.033 dma/idxd: not in enabled drivers build config
00:01:23.033 dma/ioat: not in enabled drivers build config
00:01:23.033 dma/skeleton: not in enabled drivers build config
00:01:23.033 net/af_packet: not in enabled drivers build config
00:01:23.033 net/af_xdp: not in enabled drivers build config
00:01:23.033 net/ark: not in enabled drivers build config
00:01:23.033 net/atlantic: not in enabled drivers build config
00:01:23.033 net/avp: not in enabled drivers build config
00:01:23.033 net/axgbe: not in enabled drivers build config
00:01:23.033 net/bnx2x: not in enabled drivers build config
00:01:23.033 net/bnxt: not in enabled drivers build config
00:01:23.033 net/bonding: not in enabled drivers build config
00:01:23.033 net/cnxk: not in enabled drivers build config
00:01:23.033 net/cpfl: not in enabled drivers build config
00:01:23.033 net/cxgbe: not in enabled drivers build config
00:01:23.033 net/dpaa: not in enabled drivers build config
00:01:23.033 net/dpaa2: not in enabled drivers build config
00:01:23.033 net/e1000: not in enabled drivers build config
00:01:23.033 net/ena: not in enabled drivers build config
00:01:23.033 net/enetc: not in enabled drivers build config
00:01:23.033 net/enetfec: not in enabled drivers build config
00:01:23.033 net/enic: not in enabled drivers build config
00:01:23.033 net/failsafe: not in enabled drivers build config
00:01:23.033 net/fm10k: not in enabled drivers build config
00:01:23.033 net/gve: not in enabled drivers build config
00:01:23.033 net/hinic: not in enabled drivers build config
00:01:23.033 net/hns3: not in enabled drivers build config
00:01:23.033 net/i40e: not in enabled drivers build config
00:01:23.033 net/iavf: not in enabled drivers build config
00:01:23.033 net/ice: not in enabled drivers build config
00:01:23.033 net/idpf: not in enabled drivers build config
00:01:23.033 net/igc: not in enabled drivers build config
00:01:23.033 net/ionic: not in enabled drivers build config
00:01:23.033 net/ipn3ke: not in enabled drivers build config
00:01:23.033 net/ixgbe: not in enabled drivers build config
00:01:23.033 net/mana: not in enabled drivers build config
00:01:23.033 net/memif: not in enabled drivers build config
00:01:23.033 net/mlx4: not in enabled drivers build config
00:01:23.033 net/mlx5: not in enabled drivers build config
00:01:23.033 net/mvneta: not in enabled drivers build config
00:01:23.033 net/mvpp2: not in enabled drivers build config
00:01:23.033 net/netvsc: not in enabled drivers build config
00:01:23.033 net/nfb: not in enabled drivers build config
00:01:23.033 net/nfp: not in enabled drivers build config
00:01:23.033 net/ngbe: not in enabled drivers build config
00:01:23.033 net/null: not in enabled drivers build config
00:01:23.033 net/octeontx: not in enabled drivers build config
00:01:23.033 net/octeon_ep: not in enabled drivers build config
00:01:23.033 net/pcap: not in enabled drivers build config
00:01:23.033 net/pfe: not in enabled drivers build config
00:01:23.033 net/qede: not in enabled drivers build config
00:01:23.033 net/ring: not in enabled drivers build config
00:01:23.033 net/sfc: not in enabled drivers build config
00:01:23.033 net/softnic: not in enabled drivers build config
00:01:23.033 net/tap: not in enabled drivers build config
00:01:23.033 net/thunderx: not in enabled drivers build config
00:01:23.033 net/txgbe: not in enabled drivers build config
00:01:23.033 net/vdev_netvsc: not in enabled drivers build config
00:01:23.033 net/vhost: not in enabled drivers build config
00:01:23.033 net/virtio: not in enabled drivers build config
00:01:23.033 net/vmxnet3: not in enabled drivers build config
00:01:23.033 raw/*: missing internal dependency, "rawdev"
00:01:23.033 crypto/armv8: not in enabled drivers build config
00:01:23.033 crypto/bcmfs: not in enabled drivers build config
00:01:23.033 crypto/caam_jr: not in enabled drivers build config
00:01:23.033 crypto/ccp: not in enabled drivers build config
00:01:23.033 crypto/cnxk: not in enabled drivers build config
00:01:23.033 crypto/dpaa_sec: not in enabled drivers build config
00:01:23.033 crypto/dpaa2_sec: not in enabled drivers build config
00:01:23.033 crypto/ipsec_mb: not in enabled drivers build config
00:01:23.033 crypto/mlx5: not in enabled drivers build config
00:01:23.033 crypto/mvsam: not in enabled drivers build config
00:01:23.033 crypto/nitrox: not in enabled drivers build config
00:01:23.033 crypto/null: not in enabled drivers build config
00:01:23.033 crypto/octeontx: not in enabled drivers build config
00:01:23.033 crypto/openssl: not in enabled drivers build config
00:01:23.033 crypto/scheduler: not in enabled drivers build config
00:01:23.033 crypto/uadk: not in enabled drivers build config
00:01:23.033 crypto/virtio: not in enabled drivers build config
00:01:23.033 compress/isal: not in enabled drivers build config
00:01:23.033 compress/mlx5: not in enabled drivers build config
00:01:23.033 compress/nitrox: not in enabled drivers build config
00:01:23.033 compress/octeontx: not in enabled drivers build config
00:01:23.033 compress/zlib: not in enabled drivers build config
00:01:23.033 regex/*: missing internal dependency, "regexdev"
00:01:23.033 ml/*: missing internal dependency, "mldev"
00:01:23.033 vdpa/ifc: not in enabled drivers build config
00:01:23.033 vdpa/mlx5: not in enabled drivers build config
00:01:23.033 vdpa/nfp: not in enabled drivers build config
00:01:23.033 vdpa/sfc: not in enabled drivers build config
00:01:23.033 event/*: missing internal dependency, "eventdev"
00:01:23.033 baseband/*: missing internal dependency, "bbdev"
00:01:23.033 gpu/*: missing internal dependency, "gpudev"
00:01:23.033
00:01:23.033
00:01:23.033 Build targets in project: 85
00:01:23.033
00:01:23.033 DPDK 24.03.0
00:01:23.033
00:01:23.033 User defined options
00:01:23.033 buildtype : debug
00:01:23.033 default_library : shared
00:01:23.033 libdir : lib
00:01:23.033 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:23.033 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:23.033 c_link_args :
00:01:23.033 cpu_instruction_set: native
00:01:23.033 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:01:23.033 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:01:23.033 enable_docs : false
00:01:23.033 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:23.033 enable_kmods : false
00:01:23.033 max_lcores : 128
00:01:23.033 tests : false
00:01:23.033
00:01:23.033 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:23.033 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:23.033 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:23.033 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:23.033 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:23.033 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:23.033 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:23.033 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:23.033 [7/268] Linking static target lib/librte_kvargs.a
00:01:23.033 [8/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:23.033 [9/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:23.033 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:23.033 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:23.033 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:23.033 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:23.033 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:23.033 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:23.033 [16/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:23.033 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:23.033 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:23.033 [19/268] Linking static target lib/librte_log.a
00:01:23.033 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:23.292 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:23.292 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:23.292 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:23.292 [24/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:23.292 [25/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:23.292 [26/268] Linking static target lib/librte_pci.a
00:01:23.292 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:23.292 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:23.292 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:23.292 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:23.292 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:23.292 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:23.550 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:23.550 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:23.550 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:23.550 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:23.550 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:23.550 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:23.550 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:23.550 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:23.550 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:23.550 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:23.550 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:23.550 [44/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:23.550 [45/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:23.550 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:23.550 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:23.550 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:23.550 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:23.550 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:23.550 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:23.550 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:23.550 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:23.550 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:23.550 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:23.550 [56/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:23.550 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:23.550 [58/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:23.550 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:23.550 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:23.550 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:23.550 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:23.550 [63/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:23.550 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:23.550 [65/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:23.550 [66/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:23.550 [67/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:23.550 [68/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:23.550 [69/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:23.550 [70/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:23.550 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:23.550 [72/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:23.550 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:23.550 [74/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:23.550 [75/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:23.550 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:23.550 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:23.550 [78/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:23.550 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:23.550 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:23.550 [81/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:23.550 [82/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:23.550 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:23.809 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:23.809 [85/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:23.809 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:23.809 [87/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:23.809 [88/268] Linking static target lib/librte_ring.a
00:01:23.809 [89/268] Linking static target lib/librte_meter.a
00:01:23.809 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:23.809 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:23.809 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:23.809 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:23.809 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:23.809 [95/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:23.809 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:23.809 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:23.809 [98/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:23.809 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:23.810 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:23.810 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:23.810 [102/268] Linking static target lib/librte_telemetry.a
00:01:23.810 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:23.810 [104/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:23.810 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:23.810 [106/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:23.810 [107/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:23.810 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:23.810 [109/268] Linking static target lib/librte_cmdline.a
00:01:23.810 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:23.810 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:23.810 [112/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:23.810 [113/268] Linking static target lib/librte_timer.a
00:01:23.810 [114/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:23.810 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:23.810 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:23.810 [117/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:23.810 [118/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:23.810 [119/268] Linking static target lib/librte_net.a
00:01:23.810 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:23.810 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:23.810 [122/268] Linking static target lib/librte_mempool.a
00:01:23.810 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:23.810 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:23.810 [125/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:23.810 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:23.810 [127/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:23.810 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:23.810 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:23.810 [130/268] Linking static target lib/librte_rcu.a
00:01:23.810 [131/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:23.810 [132/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:23.810 [133/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:23.810 [134/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:23.810 [135/268] Linking static target lib/librte_compressdev.a
00:01:23.810 [136/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:23.810 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:23.810 [138/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:23.810 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:23.810 [140/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:23.810 [141/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:23.810 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:23.810 [143/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:24.069 [144/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:24.069 [145/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:24.069 [146/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:24.069 [147/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:24.069 [148/268] Linking target lib/librte_log.so.24.1
00:01:24.069 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:24.069 [150/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:24.069 [151/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:24.069 [152/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:24.069 [153/268] Linking static target lib/librte_dmadev.a
00:01:24.069 [154/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:24.069 [155/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:24.069 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:24.069 [157/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:24.069 [158/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:24.069 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:24.069 [160/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:24.069 [161/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:24.069 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:24.069 [163/268] Linking static target lib/librte_mbuf.a
00:01:24.069 [164/268] Linking static target lib/librte_eal.a
00:01:24.069 [165/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:24.069 [166/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:24.069 [167/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:24.069 [168/268] Linking static target lib/librte_reorder.a
00:01:24.069 [169/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:24.069 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:24.069 [171/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:24.069 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:24.069 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:24.069 [174/268] Linking target lib/librte_kvargs.so.24.1
00:01:24.069 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:24.069 [176/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:24.328 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:24.328 [178/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:24.328 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:24.328 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:24.328 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:24.328 [182/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:24.328 [183/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:24.328 [184/268] Linking static target lib/librte_security.a
00:01:24.328 [185/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:24.328 [186/268] Linking static target lib/librte_hash.a
00:01:24.328 [187/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:24.328 [188/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:24.328 [189/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:24.328 [190/268] Linking target lib/librte_telemetry.so.24.1
00:01:24.328 [191/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:24.328 [192/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:24.328 [193/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:24.328 [194/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:24.328 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:24.328 [196/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:24.328 [197/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:24.328 [198/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:24.328 [199/268] Linking static target drivers/librte_bus_vdev.a
00:01:24.328 [200/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:24.328 [201/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:24.587 [202/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:01:24.587 [203/268] Linking static target lib/librte_power.a
00:01:24.587 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:24.587 [205/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:24.587 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:24.587 [207/268] Linking static target drivers/librte_bus_pci.a
00:01:24.587 [208/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:24.587 [209/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:24.587 [210/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:24.587 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:24.587 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:24.587 [213/268] Linking static target drivers/librte_mempool_ring.a
00:01:24.587 [214/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:24.587 [215/268] Generating lib/mempool.sym_chk with a custom command (wrapped by
meson to capture output) 00:01:24.844 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.844 [217/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:24.844 [218/268] Linking static target lib/librte_cryptodev.a 00:01:24.844 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.844 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.844 [221/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.101 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:25.101 [223/268] Linking static target lib/librte_ethdev.a 00:01:25.101 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.359 [225/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.359 [226/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:25.359 [227/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.735 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:26.735 [229/268] Linking static target lib/librte_vhost.a 00:01:26.735 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.640 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.919 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.486 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.486 [234/268] Linking target lib/librte_eal.so.24.1 00:01:34.486 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:34.761 
[236/268] Linking target lib/librte_ring.so.24.1 00:01:34.761 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:34.761 [238/268] Linking target lib/librte_pci.so.24.1 00:01:34.761 [239/268] Linking target lib/librte_meter.so.24.1 00:01:34.761 [240/268] Linking target lib/librte_timer.so.24.1 00:01:34.761 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:34.761 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:34.761 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:34.761 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:34.761 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:34.761 [246/268] Linking target lib/librte_rcu.so.24.1 00:01:34.761 [247/268] Linking target lib/librte_mempool.so.24.1 00:01:34.761 [248/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:35.020 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:35.020 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:35.020 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:35.020 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:35.020 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:35.279 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:35.279 [255/268] Linking target lib/librte_net.so.24.1 00:01:35.279 [256/268] Linking target lib/librte_compressdev.so.24.1 00:01:35.279 [257/268] Linking target lib/librte_reorder.so.24.1 00:01:35.279 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:35.537 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:35.537 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:35.537 
[261/268] Linking target lib/librte_cmdline.so.24.1 00:01:35.537 [262/268] Linking target lib/librte_hash.so.24.1 00:01:35.537 [263/268] Linking target lib/librte_security.so.24.1 00:01:35.537 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:35.537 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:35.796 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:35.796 [267/268] Linking target lib/librte_power.so.24.1 00:01:35.796 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:35.796 INFO: autodetecting backend as ninja 00:01:35.796 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:37.171 CC lib/ut_mock/mock.o 00:01:37.171 CC lib/ut/ut.o 00:01:37.171 CC lib/log/log.o 00:01:37.171 CC lib/log/log_flags.o 00:01:37.171 CC lib/log/log_deprecated.o 00:01:37.171 LIB libspdk_ut.a 00:01:37.172 LIB libspdk_ut_mock.a 00:01:37.172 LIB libspdk_log.a 00:01:37.172 SO libspdk_ut.so.2.0 00:01:37.172 SO libspdk_ut_mock.so.6.0 00:01:37.172 SO libspdk_log.so.7.0 00:01:37.430 SYMLINK libspdk_ut.so 00:01:37.430 SYMLINK libspdk_ut_mock.so 00:01:37.430 SYMLINK libspdk_log.so 00:01:37.687 CC lib/util/base64.o 00:01:37.688 CXX lib/trace_parser/trace.o 00:01:37.688 CC lib/util/bit_array.o 00:01:37.688 CC lib/util/cpuset.o 00:01:37.688 CC lib/util/crc32.o 00:01:37.688 CC lib/util/crc16.o 00:01:37.688 CC lib/util/crc32c.o 00:01:37.688 CC lib/dma/dma.o 00:01:37.688 CC lib/ioat/ioat.o 00:01:37.688 CC lib/util/crc32_ieee.o 00:01:37.688 CC lib/util/crc64.o 00:01:37.688 CC lib/util/dif.o 00:01:37.688 CC lib/util/fd.o 00:01:37.688 CC lib/util/file.o 00:01:37.688 CC lib/util/hexlify.o 00:01:37.688 CC lib/util/iov.o 00:01:37.688 CC lib/util/math.o 00:01:37.688 CC lib/util/pipe.o 00:01:37.688 CC lib/util/strerror_tls.o 00:01:37.688 CC lib/util/string.o 00:01:37.688 CC lib/util/uuid.o 00:01:37.688 CC 
lib/util/fd_group.o 00:01:37.688 CC lib/util/xor.o 00:01:37.688 CC lib/util/zipf.o 00:01:37.946 CC lib/vfio_user/host/vfio_user_pci.o 00:01:37.946 CC lib/vfio_user/host/vfio_user.o 00:01:37.946 LIB libspdk_dma.a 00:01:37.946 SO libspdk_dma.so.4.0 00:01:37.946 LIB libspdk_ioat.a 00:01:37.946 SYMLINK libspdk_dma.so 00:01:38.204 SO libspdk_ioat.so.7.0 00:01:38.204 SYMLINK libspdk_ioat.so 00:01:38.204 LIB libspdk_util.a 00:01:38.463 SO libspdk_util.so.9.1 00:01:38.463 LIB libspdk_vfio_user.a 00:01:38.463 SO libspdk_vfio_user.so.5.0 00:01:38.463 SYMLINK libspdk_vfio_user.so 00:01:38.463 SYMLINK libspdk_util.so 00:01:38.722 LIB libspdk_trace_parser.a 00:01:38.722 SO libspdk_trace_parser.so.5.0 00:01:38.722 SYMLINK libspdk_trace_parser.so 00:01:38.722 CC lib/rdma_utils/rdma_utils.o 00:01:38.722 CC lib/json/json_parse.o 00:01:38.722 CC lib/json/json_write.o 00:01:38.722 CC lib/json/json_util.o 00:01:38.722 CC lib/rdma_provider/common.o 00:01:38.722 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:38.722 CC lib/conf/conf.o 00:01:38.722 CC lib/env_dpdk/env.o 00:01:38.722 CC lib/idxd/idxd.o 00:01:38.722 CC lib/env_dpdk/memory.o 00:01:38.722 CC lib/idxd/idxd_user.o 00:01:38.722 CC lib/env_dpdk/pci.o 00:01:38.722 CC lib/idxd/idxd_kernel.o 00:01:38.722 CC lib/env_dpdk/init.o 00:01:38.722 CC lib/env_dpdk/threads.o 00:01:38.722 CC lib/env_dpdk/pci_ioat.o 00:01:38.722 CC lib/env_dpdk/pci_virtio.o 00:01:38.722 CC lib/vmd/vmd.o 00:01:38.722 CC lib/env_dpdk/pci_vmd.o 00:01:38.722 CC lib/vmd/led.o 00:01:38.722 CC lib/env_dpdk/pci_idxd.o 00:01:38.722 CC lib/env_dpdk/pci_event.o 00:01:38.722 CC lib/env_dpdk/sigbus_handler.o 00:01:38.722 CC lib/env_dpdk/pci_dpdk.o 00:01:38.722 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:38.722 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:38.979 LIB libspdk_rdma_provider.a 00:01:39.237 SO libspdk_rdma_provider.so.6.0 00:01:39.237 LIB libspdk_conf.a 00:01:39.237 LIB libspdk_rdma_utils.a 00:01:39.237 SO libspdk_rdma_utils.so.1.0 00:01:39.237 SO libspdk_conf.so.6.0 
00:01:39.237 LIB libspdk_json.a 00:01:39.237 SYMLINK libspdk_rdma_provider.so 00:01:39.237 SO libspdk_json.so.6.0 00:01:39.237 SYMLINK libspdk_conf.so 00:01:39.237 SYMLINK libspdk_rdma_utils.so 00:01:39.237 SYMLINK libspdk_json.so 00:01:39.495 LIB libspdk_idxd.a 00:01:39.495 SO libspdk_idxd.so.12.0 00:01:39.495 LIB libspdk_vmd.a 00:01:39.495 SYMLINK libspdk_idxd.so 00:01:39.495 SO libspdk_vmd.so.6.0 00:01:39.495 CC lib/jsonrpc/jsonrpc_server.o 00:01:39.495 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:39.495 CC lib/jsonrpc/jsonrpc_client.o 00:01:39.495 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:39.495 SYMLINK libspdk_vmd.so 00:01:40.062 LIB libspdk_jsonrpc.a 00:01:40.062 SO libspdk_jsonrpc.so.6.0 00:01:40.062 SYMLINK libspdk_jsonrpc.so 00:01:40.321 LIB libspdk_env_dpdk.a 00:01:40.580 CC lib/rpc/rpc.o 00:01:40.580 SO libspdk_env_dpdk.so.14.1 00:01:40.580 SYMLINK libspdk_env_dpdk.so 00:01:40.580 LIB libspdk_rpc.a 00:01:40.839 SO libspdk_rpc.so.6.0 00:01:40.839 SYMLINK libspdk_rpc.so 00:01:41.097 CC lib/notify/notify.o 00:01:41.097 CC lib/notify/notify_rpc.o 00:01:41.097 CC lib/trace/trace.o 00:01:41.097 CC lib/trace/trace_flags.o 00:01:41.097 CC lib/trace/trace_rpc.o 00:01:41.097 CC lib/keyring/keyring.o 00:01:41.097 CC lib/keyring/keyring_rpc.o 00:01:41.356 LIB libspdk_notify.a 00:01:41.356 SO libspdk_notify.so.6.0 00:01:41.356 LIB libspdk_trace.a 00:01:41.356 SYMLINK libspdk_notify.so 00:01:41.356 LIB libspdk_keyring.a 00:01:41.356 SO libspdk_trace.so.10.0 00:01:41.356 SO libspdk_keyring.so.1.0 00:01:41.615 SYMLINK libspdk_trace.so 00:01:41.615 SYMLINK libspdk_keyring.so 00:01:41.872 CC lib/thread/thread.o 00:01:41.872 CC lib/thread/iobuf.o 00:01:41.872 CC lib/sock/sock.o 00:01:41.872 CC lib/sock/sock_rpc.o 00:01:42.131 LIB libspdk_sock.a 00:01:42.390 SO libspdk_sock.so.10.0 00:01:42.390 SYMLINK libspdk_sock.so 00:01:42.649 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:42.649 CC lib/nvme/nvme_ctrlr.o 00:01:42.649 CC lib/nvme/nvme_fabric.o 00:01:42.649 CC lib/nvme/nvme_ns_cmd.o 
00:01:42.649 CC lib/nvme/nvme_ns.o 00:01:42.649 CC lib/nvme/nvme_pcie_common.o 00:01:42.649 CC lib/nvme/nvme_pcie.o 00:01:42.649 CC lib/nvme/nvme_qpair.o 00:01:42.649 CC lib/nvme/nvme.o 00:01:42.649 CC lib/nvme/nvme_quirks.o 00:01:42.649 CC lib/nvme/nvme_transport.o 00:01:42.649 CC lib/nvme/nvme_discovery.o 00:01:42.649 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:42.649 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:42.649 CC lib/nvme/nvme_tcp.o 00:01:42.649 CC lib/nvme/nvme_opal.o 00:01:42.649 CC lib/nvme/nvme_io_msg.o 00:01:42.649 CC lib/nvme/nvme_poll_group.o 00:01:42.649 CC lib/nvme/nvme_stubs.o 00:01:42.649 CC lib/nvme/nvme_zns.o 00:01:42.649 CC lib/nvme/nvme_auth.o 00:01:42.649 CC lib/nvme/nvme_cuse.o 00:01:42.649 CC lib/nvme/nvme_vfio_user.o 00:01:42.649 CC lib/nvme/nvme_rdma.o 00:01:43.215 LIB libspdk_thread.a 00:01:43.473 SO libspdk_thread.so.10.1 00:01:43.473 SYMLINK libspdk_thread.so 00:01:43.732 CC lib/accel/accel.o 00:01:43.732 CC lib/init/json_config.o 00:01:43.732 CC lib/virtio/virtio.o 00:01:43.732 CC lib/accel/accel_rpc.o 00:01:43.732 CC lib/init/subsystem_rpc.o 00:01:43.732 CC lib/init/subsystem.o 00:01:43.732 CC lib/virtio/virtio_vhost_user.o 00:01:43.732 CC lib/init/rpc.o 00:01:43.732 CC lib/accel/accel_sw.o 00:01:43.732 CC lib/virtio/virtio_vfio_user.o 00:01:43.732 CC lib/virtio/virtio_pci.o 00:01:43.732 CC lib/vfu_tgt/tgt_endpoint.o 00:01:43.732 CC lib/vfu_tgt/tgt_rpc.o 00:01:43.732 CC lib/blob/blobstore.o 00:01:43.732 CC lib/blob/request.o 00:01:43.732 CC lib/blob/zeroes.o 00:01:43.732 CC lib/blob/blob_bs_dev.o 00:01:43.990 LIB libspdk_init.a 00:01:43.990 SO libspdk_init.so.5.0 00:01:43.990 LIB libspdk_virtio.a 00:01:43.990 SYMLINK libspdk_init.so 00:01:43.990 LIB libspdk_vfu_tgt.a 00:01:44.248 SO libspdk_virtio.so.7.0 00:01:44.248 SO libspdk_vfu_tgt.so.3.0 00:01:44.248 SYMLINK libspdk_vfu_tgt.so 00:01:44.248 SYMLINK libspdk_virtio.so 00:01:44.505 CC lib/event/app.o 00:01:44.505 CC lib/event/reactor.o 00:01:44.505 CC lib/event/log_rpc.o 00:01:44.505 CC 
lib/event/app_rpc.o 00:01:44.505 CC lib/event/scheduler_static.o 00:01:44.763 LIB libspdk_accel.a 00:01:44.763 SO libspdk_accel.so.15.1 00:01:44.763 LIB libspdk_event.a 00:01:44.763 SYMLINK libspdk_accel.so 00:01:44.763 SO libspdk_event.so.14.0 00:01:45.022 SYMLINK libspdk_event.so 00:01:45.022 LIB libspdk_nvme.a 00:01:45.280 CC lib/bdev/bdev.o 00:01:45.280 CC lib/bdev/bdev_rpc.o 00:01:45.280 CC lib/bdev/bdev_zone.o 00:01:45.280 CC lib/bdev/part.o 00:01:45.280 CC lib/bdev/scsi_nvme.o 00:01:45.280 SO libspdk_nvme.so.13.1 00:01:45.539 SYMLINK libspdk_nvme.so 00:01:46.916 LIB libspdk_blob.a 00:01:46.916 SO libspdk_blob.so.11.0 00:01:46.916 LIB libspdk_bdev.a 00:01:46.916 SYMLINK libspdk_blob.so 00:01:46.916 SO libspdk_bdev.so.15.1 00:01:46.916 SYMLINK libspdk_bdev.so 00:01:47.174 CC lib/lvol/lvol.o 00:01:47.174 CC lib/blobfs/blobfs.o 00:01:47.174 CC lib/blobfs/tree.o 00:01:47.434 CC lib/nvmf/ctrlr.o 00:01:47.434 CC lib/nvmf/ctrlr_discovery.o 00:01:47.434 CC lib/ftl/ftl_core.o 00:01:47.434 CC lib/nvmf/ctrlr_bdev.o 00:01:47.434 CC lib/nvmf/subsystem.o 00:01:47.434 CC lib/ftl/ftl_init.o 00:01:47.434 CC lib/ublk/ublk_rpc.o 00:01:47.434 CC lib/ftl/ftl_layout.o 00:01:47.434 CC lib/ublk/ublk.o 00:01:47.434 CC lib/nvmf/nvmf.o 00:01:47.434 CC lib/nvmf/nvmf_rpc.o 00:01:47.434 CC lib/scsi/dev.o 00:01:47.434 CC lib/ftl/ftl_debug.o 00:01:47.434 CC lib/nbd/nbd.o 00:01:47.434 CC lib/ftl/ftl_io.o 00:01:47.434 CC lib/ftl/ftl_sb.o 00:01:47.434 CC lib/nvmf/transport.o 00:01:47.434 CC lib/scsi/lun.o 00:01:47.434 CC lib/nbd/nbd_rpc.o 00:01:47.434 CC lib/nvmf/tcp.o 00:01:47.434 CC lib/scsi/port.o 00:01:47.434 CC lib/ftl/ftl_l2p.o 00:01:47.434 CC lib/scsi/scsi.o 00:01:47.434 CC lib/nvmf/stubs.o 00:01:47.434 CC lib/scsi/scsi_bdev.o 00:01:47.434 CC lib/ftl/ftl_l2p_flat.o 00:01:47.434 CC lib/nvmf/mdns_server.o 00:01:47.434 CC lib/scsi/scsi_pr.o 00:01:47.434 CC lib/nvmf/vfio_user.o 00:01:47.434 CC lib/scsi/scsi_rpc.o 00:01:47.434 CC lib/ftl/ftl_nv_cache.o 00:01:47.434 CC lib/ftl/ftl_band.o 
00:01:47.434 CC lib/scsi/task.o 00:01:47.434 CC lib/ftl/ftl_band_ops.o 00:01:47.434 CC lib/nvmf/rdma.o 00:01:47.434 CC lib/nvmf/auth.o 00:01:47.434 CC lib/ftl/ftl_writer.o 00:01:47.434 CC lib/ftl/ftl_rq.o 00:01:47.434 CC lib/ftl/ftl_l2p_cache.o 00:01:47.434 CC lib/ftl/ftl_reloc.o 00:01:47.434 CC lib/ftl/mngt/ftl_mngt.o 00:01:47.434 CC lib/ftl/ftl_p2l.o 00:01:47.434 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:47.434 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:47.434 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:47.434 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:47.434 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:47.434 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:47.434 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:47.434 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:47.434 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:47.434 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:47.434 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:47.434 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:47.434 CC lib/ftl/utils/ftl_conf.o 00:01:47.434 CC lib/ftl/utils/ftl_md.o 00:01:47.434 CC lib/ftl/utils/ftl_mempool.o 00:01:47.434 CC lib/ftl/utils/ftl_property.o 00:01:47.434 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:47.434 CC lib/ftl/utils/ftl_bitmap.o 00:01:47.434 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:47.434 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:47.434 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:47.434 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:47.434 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:47.434 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:47.434 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:47.434 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:47.434 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:47.434 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:47.434 CC lib/ftl/base/ftl_base_dev.o 00:01:47.434 CC lib/ftl/base/ftl_base_bdev.o 00:01:47.434 CC lib/ftl/ftl_trace.o 00:01:47.999 LIB libspdk_scsi.a 00:01:48.256 SO libspdk_scsi.so.9.0 00:01:48.256 SYMLINK libspdk_scsi.so 00:01:48.256 LIB libspdk_nbd.a 00:01:48.256 LIB libspdk_blobfs.a 00:01:48.256 SO libspdk_nbd.so.7.0 
00:01:48.256 SO libspdk_blobfs.so.10.0 00:01:48.256 LIB libspdk_ublk.a 00:01:48.513 SO libspdk_ublk.so.3.0 00:01:48.513 SYMLINK libspdk_nbd.so 00:01:48.513 LIB libspdk_lvol.a 00:01:48.513 SYMLINK libspdk_blobfs.so 00:01:48.513 SO libspdk_lvol.so.10.0 00:01:48.513 SYMLINK libspdk_ublk.so 00:01:48.513 LIB libspdk_ftl.a 00:01:48.513 SYMLINK libspdk_lvol.so 00:01:48.513 CC lib/iscsi/conn.o 00:01:48.513 CC lib/iscsi/iscsi.o 00:01:48.513 CC lib/iscsi/init_grp.o 00:01:48.513 CC lib/iscsi/md5.o 00:01:48.513 CC lib/iscsi/param.o 00:01:48.513 CC lib/vhost/vhost.o 00:01:48.513 CC lib/iscsi/portal_grp.o 00:01:48.513 CC lib/iscsi/tgt_node.o 00:01:48.513 CC lib/vhost/vhost_rpc.o 00:01:48.513 CC lib/iscsi/iscsi_subsystem.o 00:01:48.513 CC lib/iscsi/iscsi_rpc.o 00:01:48.513 CC lib/vhost/vhost_scsi.o 00:01:48.513 CC lib/vhost/vhost_blk.o 00:01:48.513 CC lib/iscsi/task.o 00:01:48.513 CC lib/vhost/rte_vhost_user.o 00:01:48.771 SO libspdk_ftl.so.9.0 00:01:49.028 SYMLINK libspdk_ftl.so 00:01:49.594 LIB libspdk_vhost.a 00:01:49.852 LIB libspdk_nvmf.a 00:01:49.852 SO libspdk_vhost.so.8.0 00:01:49.852 SO libspdk_nvmf.so.18.1 00:01:49.852 SYMLINK libspdk_vhost.so 00:01:49.852 LIB libspdk_iscsi.a 00:01:50.110 SO libspdk_iscsi.so.8.0 00:01:50.110 SYMLINK libspdk_nvmf.so 00:01:50.110 SYMLINK libspdk_iscsi.so 00:01:50.674 CC module/vfu_device/vfu_virtio.o 00:01:50.674 CC module/vfu_device/vfu_virtio_blk.o 00:01:50.674 CC module/vfu_device/vfu_virtio_scsi.o 00:01:50.674 CC module/vfu_device/vfu_virtio_rpc.o 00:01:50.674 CC module/env_dpdk/env_dpdk_rpc.o 00:01:50.954 CC module/accel/error/accel_error.o 00:01:50.954 CC module/accel/error/accel_error_rpc.o 00:01:50.954 CC module/blob/bdev/blob_bdev.o 00:01:50.954 CC module/accel/ioat/accel_ioat.o 00:01:50.954 CC module/accel/ioat/accel_ioat_rpc.o 00:01:50.954 CC module/sock/posix/posix.o 00:01:50.954 CC module/accel/iaa/accel_iaa.o 00:01:50.954 CC module/accel/iaa/accel_iaa_rpc.o 00:01:50.954 LIB libspdk_env_dpdk_rpc.a 00:01:50.954 CC 
module/scheduler/dynamic/scheduler_dynamic.o 00:01:50.954 CC module/accel/dsa/accel_dsa.o 00:01:50.954 CC module/accel/dsa/accel_dsa_rpc.o 00:01:50.954 CC module/keyring/linux/keyring.o 00:01:50.954 CC module/keyring/linux/keyring_rpc.o 00:01:50.954 CC module/scheduler/gscheduler/gscheduler.o 00:01:50.954 CC module/keyring/file/keyring.o 00:01:50.954 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:50.954 CC module/keyring/file/keyring_rpc.o 00:01:50.954 SO libspdk_env_dpdk_rpc.so.6.0 00:01:50.954 SYMLINK libspdk_env_dpdk_rpc.so 00:01:50.954 LIB libspdk_keyring_linux.a 00:01:50.954 LIB libspdk_accel_error.a 00:01:50.954 LIB libspdk_scheduler_dpdk_governor.a 00:01:50.954 SO libspdk_accel_error.so.2.0 00:01:50.954 SO libspdk_keyring_linux.so.1.0 00:01:51.211 LIB libspdk_accel_ioat.a 00:01:51.211 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:51.211 LIB libspdk_scheduler_dynamic.a 00:01:51.211 LIB libspdk_accel_iaa.a 00:01:51.211 SO libspdk_accel_ioat.so.6.0 00:01:51.211 SO libspdk_scheduler_dynamic.so.4.0 00:01:51.211 SYMLINK libspdk_accel_error.so 00:01:51.211 SYMLINK libspdk_keyring_linux.so 00:01:51.211 SO libspdk_accel_iaa.so.3.0 00:01:51.211 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:51.211 LIB libspdk_accel_dsa.a 00:01:51.211 LIB libspdk_blob_bdev.a 00:01:51.211 SYMLINK libspdk_accel_ioat.so 00:01:51.211 LIB libspdk_keyring_file.a 00:01:51.211 SO libspdk_accel_dsa.so.5.0 00:01:51.211 SYMLINK libspdk_scheduler_dynamic.so 00:01:51.211 SO libspdk_blob_bdev.so.11.0 00:01:51.211 LIB libspdk_scheduler_gscheduler.a 00:01:51.211 SYMLINK libspdk_accel_iaa.so 00:01:51.211 SO libspdk_keyring_file.so.1.0 00:01:51.211 SO libspdk_scheduler_gscheduler.so.4.0 00:01:51.211 SYMLINK libspdk_blob_bdev.so 00:01:51.211 SYMLINK libspdk_accel_dsa.so 00:01:51.211 SYMLINK libspdk_scheduler_gscheduler.so 00:01:51.211 SYMLINK libspdk_keyring_file.so 00:01:51.467 LIB libspdk_vfu_device.a 00:01:51.467 SO libspdk_vfu_device.so.3.0 00:01:51.467 SYMLINK libspdk_vfu_device.so 
00:01:51.724 LIB libspdk_sock_posix.a 00:01:51.724 SO libspdk_sock_posix.so.6.0 00:01:51.724 CC module/bdev/delay/vbdev_delay.o 00:01:51.724 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:51.724 CC module/bdev/gpt/gpt.o 00:01:51.724 CC module/bdev/gpt/vbdev_gpt.o 00:01:51.724 CC module/bdev/null/bdev_null.o 00:01:51.724 CC module/bdev/null/bdev_null_rpc.o 00:01:51.724 CC module/bdev/error/vbdev_error.o 00:01:51.724 CC module/bdev/lvol/vbdev_lvol.o 00:01:51.724 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:51.724 CC module/bdev/error/vbdev_error_rpc.o 00:01:51.724 CC module/bdev/malloc/bdev_malloc.o 00:01:51.724 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:51.724 CC module/blobfs/bdev/blobfs_bdev.o 00:01:51.724 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:51.724 CC module/bdev/split/vbdev_split.o 00:01:51.724 CC module/bdev/split/vbdev_split_rpc.o 00:01:51.724 CC module/bdev/aio/bdev_aio.o 00:01:51.724 CC module/bdev/aio/bdev_aio_rpc.o 00:01:51.724 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:51.724 CC module/bdev/ftl/bdev_ftl.o 00:01:51.724 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:51.724 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:51.724 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:51.724 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:51.724 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:51.724 CC module/bdev/nvme/bdev_nvme.o 00:01:51.724 CC module/bdev/passthru/vbdev_passthru.o 00:01:51.724 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:51.724 CC module/bdev/nvme/nvme_rpc.o 00:01:51.724 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:51.724 CC module/bdev/raid/bdev_raid.o 00:01:51.724 CC module/bdev/raid/bdev_raid_rpc.o 00:01:51.725 CC module/bdev/nvme/bdev_mdns_client.o 00:01:51.725 CC module/bdev/raid/bdev_raid_sb.o 00:01:51.725 CC module/bdev/nvme/vbdev_opal.o 00:01:51.725 CC module/bdev/raid/raid0.o 00:01:51.725 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:51.725 CC module/bdev/raid/raid1.o 00:01:51.725 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 
00:01:51.725 CC module/bdev/raid/concat.o 00:01:51.725 CC module/bdev/iscsi/bdev_iscsi.o 00:01:51.725 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:51.725 SYMLINK libspdk_sock_posix.so 00:01:51.981 LIB libspdk_blobfs_bdev.a 00:01:51.981 LIB libspdk_bdev_null.a 00:01:51.981 LIB libspdk_bdev_error.a 00:01:51.981 SO libspdk_blobfs_bdev.so.6.0 00:01:51.981 SO libspdk_bdev_null.so.6.0 00:01:51.981 LIB libspdk_bdev_gpt.a 00:01:52.238 SO libspdk_bdev_error.so.6.0 00:01:52.238 SO libspdk_bdev_gpt.so.6.0 00:01:52.238 SYMLINK libspdk_blobfs_bdev.so 00:01:52.238 LIB libspdk_bdev_aio.a 00:01:52.238 SYMLINK libspdk_bdev_null.so 00:01:52.238 LIB libspdk_bdev_ftl.a 00:01:52.238 LIB libspdk_bdev_passthru.a 00:01:52.238 SYMLINK libspdk_bdev_error.so 00:01:52.238 SO libspdk_bdev_aio.so.6.0 00:01:52.238 SO libspdk_bdev_ftl.so.6.0 00:01:52.238 SYMLINK libspdk_bdev_gpt.so 00:01:52.238 LIB libspdk_bdev_malloc.a 00:01:52.238 SO libspdk_bdev_passthru.so.6.0 00:01:52.238 LIB libspdk_bdev_delay.a 00:01:52.238 LIB libspdk_bdev_lvol.a 00:01:52.238 LIB libspdk_bdev_iscsi.a 00:01:52.238 LIB libspdk_bdev_zone_block.a 00:01:52.238 SO libspdk_bdev_delay.so.6.0 00:01:52.238 SYMLINK libspdk_bdev_aio.so 00:01:52.238 SO libspdk_bdev_malloc.so.6.0 00:01:52.238 LIB libspdk_bdev_split.a 00:01:52.238 SYMLINK libspdk_bdev_passthru.so 00:01:52.238 SO libspdk_bdev_lvol.so.6.0 00:01:52.238 SYMLINK libspdk_bdev_ftl.so 00:01:52.238 SO libspdk_bdev_iscsi.so.6.0 00:01:52.238 SO libspdk_bdev_zone_block.so.6.0 00:01:52.238 SO libspdk_bdev_split.so.6.0 00:01:52.238 SYMLINK libspdk_bdev_delay.so 00:01:52.238 SYMLINK libspdk_bdev_malloc.so 00:01:52.238 SYMLINK libspdk_bdev_iscsi.so 00:01:52.238 SYMLINK libspdk_bdev_lvol.so 00:01:52.495 SYMLINK libspdk_bdev_zone_block.so 00:01:52.495 LIB libspdk_bdev_virtio.a 00:01:52.495 SYMLINK libspdk_bdev_split.so 00:01:52.495 SO libspdk_bdev_virtio.so.6.0 00:01:52.495 SYMLINK libspdk_bdev_virtio.so 00:01:52.754 LIB libspdk_bdev_raid.a 00:01:52.754 SO libspdk_bdev_raid.so.6.0 
00:01:53.012 SYMLINK libspdk_bdev_raid.so 00:01:53.270 LIB libspdk_bdev_nvme.a 00:01:53.529 SO libspdk_bdev_nvme.so.7.0 00:01:53.529 SYMLINK libspdk_bdev_nvme.so 00:01:54.175 CC module/event/subsystems/iobuf/iobuf.o 00:01:54.175 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:54.175 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:54.175 CC module/event/subsystems/vmd/vmd.o 00:01:54.175 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:54.175 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:54.175 CC module/event/subsystems/sock/sock.o 00:01:54.175 CC module/event/subsystems/scheduler/scheduler.o 00:01:54.175 CC module/event/subsystems/keyring/keyring.o 00:01:54.434 LIB libspdk_event_keyring.a 00:01:54.434 LIB libspdk_event_scheduler.a 00:01:54.434 LIB libspdk_event_vfu_tgt.a 00:01:54.434 LIB libspdk_event_vmd.a 00:01:54.434 LIB libspdk_event_vhost_blk.a 00:01:54.434 LIB libspdk_event_sock.a 00:01:54.434 LIB libspdk_event_iobuf.a 00:01:54.434 SO libspdk_event_keyring.so.1.0 00:01:54.434 SO libspdk_event_vfu_tgt.so.3.0 00:01:54.434 SO libspdk_event_scheduler.so.4.0 00:01:54.434 SO libspdk_event_vhost_blk.so.3.0 00:01:54.434 SO libspdk_event_vmd.so.6.0 00:01:54.434 SO libspdk_event_sock.so.5.0 00:01:54.434 SO libspdk_event_iobuf.so.3.0 00:01:54.434 SYMLINK libspdk_event_keyring.so 00:01:54.434 SYMLINK libspdk_event_vfu_tgt.so 00:01:54.434 SYMLINK libspdk_event_scheduler.so 00:01:54.434 SYMLINK libspdk_event_sock.so 00:01:54.434 SYMLINK libspdk_event_vmd.so 00:01:54.434 SYMLINK libspdk_event_vhost_blk.so 00:01:54.434 SYMLINK libspdk_event_iobuf.so 00:01:55.002 CC module/event/subsystems/accel/accel.o 00:01:55.002 LIB libspdk_event_accel.a 00:01:55.262 SO libspdk_event_accel.so.6.0 00:01:55.262 SYMLINK libspdk_event_accel.so 00:01:55.521 CC module/event/subsystems/bdev/bdev.o 00:01:55.780 LIB libspdk_event_bdev.a 00:01:55.780 SO libspdk_event_bdev.so.6.0 00:01:55.780 SYMLINK libspdk_event_bdev.so 00:01:56.039 CC module/event/subsystems/nvmf/nvmf_rpc.o 
00:01:56.039 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:56.039 CC module/event/subsystems/nbd/nbd.o 00:01:56.039 CC module/event/subsystems/scsi/scsi.o 00:01:56.299 CC module/event/subsystems/ublk/ublk.o 00:01:56.299 LIB libspdk_event_nbd.a 00:01:56.299 LIB libspdk_event_scsi.a 00:01:56.299 LIB libspdk_event_ublk.a 00:01:56.299 SO libspdk_event_nbd.so.6.0 00:01:56.299 SO libspdk_event_scsi.so.6.0 00:01:56.299 SO libspdk_event_ublk.so.3.0 00:01:56.299 LIB libspdk_event_nvmf.a 00:01:56.299 SYMLINK libspdk_event_nbd.so 00:01:56.558 SYMLINK libspdk_event_scsi.so 00:01:56.558 SYMLINK libspdk_event_ublk.so 00:01:56.558 SO libspdk_event_nvmf.so.6.0 00:01:56.558 SYMLINK libspdk_event_nvmf.so 00:01:56.816 CC module/event/subsystems/iscsi/iscsi.o 00:01:56.816 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:56.816 LIB libspdk_event_vhost_scsi.a 00:01:57.075 LIB libspdk_event_iscsi.a 00:01:57.075 SO libspdk_event_vhost_scsi.so.3.0 00:01:57.075 SO libspdk_event_iscsi.so.6.0 00:01:57.075 SYMLINK libspdk_event_vhost_scsi.so 00:01:57.075 SYMLINK libspdk_event_iscsi.so 00:01:57.334 SO libspdk.so.6.0 00:01:57.334 SYMLINK libspdk.so 00:01:57.599 TEST_HEADER include/spdk/assert.h 00:01:57.599 TEST_HEADER include/spdk/accel.h 00:01:57.599 TEST_HEADER include/spdk/accel_module.h 00:01:57.599 CC app/trace_record/trace_record.o 00:01:57.599 TEST_HEADER include/spdk/barrier.h 00:01:57.599 TEST_HEADER include/spdk/base64.h 00:01:57.599 TEST_HEADER include/spdk/bdev.h 00:01:57.599 CXX app/trace/trace.o 00:01:57.599 TEST_HEADER include/spdk/bdev_module.h 00:01:57.599 TEST_HEADER include/spdk/bdev_zone.h 00:01:57.599 TEST_HEADER include/spdk/bit_pool.h 00:01:57.599 TEST_HEADER include/spdk/bit_array.h 00:01:57.599 TEST_HEADER include/spdk/blob_bdev.h 00:01:57.599 CC app/spdk_top/spdk_top.o 00:01:57.599 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:57.599 TEST_HEADER include/spdk/blobfs.h 00:01:57.599 TEST_HEADER include/spdk/blob.h 00:01:57.599 TEST_HEADER include/spdk/conf.h 
00:01:57.599 TEST_HEADER include/spdk/cpuset.h 00:01:57.599 TEST_HEADER include/spdk/config.h 00:01:57.599 TEST_HEADER include/spdk/crc16.h 00:01:57.599 TEST_HEADER include/spdk/crc32.h 00:01:57.599 CC app/spdk_lspci/spdk_lspci.o 00:01:57.599 TEST_HEADER include/spdk/crc64.h 00:01:57.599 TEST_HEADER include/spdk/dif.h 00:01:57.599 CC app/spdk_nvme_identify/identify.o 00:01:57.599 TEST_HEADER include/spdk/dma.h 00:01:57.599 CC app/spdk_nvme_perf/perf.o 00:01:57.599 TEST_HEADER include/spdk/endian.h 00:01:57.599 TEST_HEADER include/spdk/env_dpdk.h 00:01:57.599 TEST_HEADER include/spdk/env.h 00:01:57.599 CC test/rpc_client/rpc_client_test.o 00:01:57.599 TEST_HEADER include/spdk/fd_group.h 00:01:57.599 TEST_HEADER include/spdk/event.h 00:01:57.599 TEST_HEADER include/spdk/fd.h 00:01:57.599 TEST_HEADER include/spdk/file.h 00:01:57.599 TEST_HEADER include/spdk/ftl.h 00:01:57.599 TEST_HEADER include/spdk/gpt_spec.h 00:01:57.599 CC app/spdk_nvme_discover/discovery_aer.o 00:01:57.599 TEST_HEADER include/spdk/hexlify.h 00:01:57.599 TEST_HEADER include/spdk/histogram_data.h 00:01:57.599 TEST_HEADER include/spdk/idxd.h 00:01:57.599 TEST_HEADER include/spdk/init.h 00:01:57.599 TEST_HEADER include/spdk/idxd_spec.h 00:01:57.599 TEST_HEADER include/spdk/ioat_spec.h 00:01:57.599 TEST_HEADER include/spdk/iscsi_spec.h 00:01:57.599 TEST_HEADER include/spdk/ioat.h 00:01:57.599 TEST_HEADER include/spdk/json.h 00:01:57.599 TEST_HEADER include/spdk/keyring.h 00:01:57.599 TEST_HEADER include/spdk/keyring_module.h 00:01:57.599 TEST_HEADER include/spdk/likely.h 00:01:57.599 TEST_HEADER include/spdk/jsonrpc.h 00:01:57.599 TEST_HEADER include/spdk/log.h 00:01:57.599 TEST_HEADER include/spdk/memory.h 00:01:57.599 TEST_HEADER include/spdk/mmio.h 00:01:57.599 TEST_HEADER include/spdk/nbd.h 00:01:57.599 TEST_HEADER include/spdk/lvol.h 00:01:57.599 TEST_HEADER include/spdk/nvme.h 00:01:57.599 TEST_HEADER include/spdk/notify.h 00:01:57.599 TEST_HEADER include/spdk/nvme_intel.h 00:01:57.599 
TEST_HEADER include/spdk/nvme_ocssd.h 00:01:57.599 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:57.599 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:57.599 TEST_HEADER include/spdk/nvme_zns.h 00:01:57.599 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:57.599 TEST_HEADER include/spdk/nvme_spec.h 00:01:57.599 TEST_HEADER include/spdk/nvmf.h 00:01:57.599 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:57.599 TEST_HEADER include/spdk/nvmf_spec.h 00:01:57.599 TEST_HEADER include/spdk/nvmf_transport.h 00:01:57.599 TEST_HEADER include/spdk/opal.h 00:01:57.599 TEST_HEADER include/spdk/opal_spec.h 00:01:57.599 TEST_HEADER include/spdk/pipe.h 00:01:57.599 TEST_HEADER include/spdk/pci_ids.h 00:01:57.599 TEST_HEADER include/spdk/reduce.h 00:01:57.599 TEST_HEADER include/spdk/queue.h 00:01:57.599 TEST_HEADER include/spdk/rpc.h 00:01:57.599 TEST_HEADER include/spdk/scheduler.h 00:01:57.599 TEST_HEADER include/spdk/scsi.h 00:01:57.599 TEST_HEADER include/spdk/sock.h 00:01:57.599 TEST_HEADER include/spdk/stdinc.h 00:01:57.599 TEST_HEADER include/spdk/scsi_spec.h 00:01:57.599 TEST_HEADER include/spdk/string.h 00:01:57.599 TEST_HEADER include/spdk/thread.h 00:01:57.599 CC app/nvmf_tgt/nvmf_main.o 00:01:57.599 TEST_HEADER include/spdk/trace.h 00:01:57.599 TEST_HEADER include/spdk/tree.h 00:01:57.599 TEST_HEADER include/spdk/trace_parser.h 00:01:57.599 TEST_HEADER include/spdk/ublk.h 00:01:57.599 TEST_HEADER include/spdk/util.h 00:01:57.599 TEST_HEADER include/spdk/uuid.h 00:01:57.599 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:57.599 TEST_HEADER include/spdk/version.h 00:01:57.599 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:57.599 TEST_HEADER include/spdk/vmd.h 00:01:57.599 TEST_HEADER include/spdk/xor.h 00:01:57.599 TEST_HEADER include/spdk/vhost.h 00:01:57.599 TEST_HEADER include/spdk/zipf.h 00:01:57.600 CXX test/cpp_headers/accel.o 00:01:57.600 CXX test/cpp_headers/accel_module.o 00:01:57.600 CXX test/cpp_headers/barrier.o 00:01:57.600 CXX test/cpp_headers/assert.o 
00:01:57.600 CXX test/cpp_headers/bdev.o 00:01:57.600 CXX test/cpp_headers/base64.o 00:01:57.600 CXX test/cpp_headers/bdev_module.o 00:01:57.600 CXX test/cpp_headers/bdev_zone.o 00:01:57.600 CXX test/cpp_headers/bit_pool.o 00:01:57.600 CXX test/cpp_headers/bit_array.o 00:01:57.600 CXX test/cpp_headers/blob_bdev.o 00:01:57.600 CXX test/cpp_headers/blobfs_bdev.o 00:01:57.600 CXX test/cpp_headers/blobfs.o 00:01:57.600 CXX test/cpp_headers/blob.o 00:01:57.600 CXX test/cpp_headers/conf.o 00:01:57.600 CXX test/cpp_headers/cpuset.o 00:01:57.600 CXX test/cpp_headers/config.o 00:01:57.600 CXX test/cpp_headers/crc16.o 00:01:57.600 CXX test/cpp_headers/dif.o 00:01:57.600 CXX test/cpp_headers/crc32.o 00:01:57.600 CXX test/cpp_headers/dma.o 00:01:57.600 CXX test/cpp_headers/crc64.o 00:01:57.600 CXX test/cpp_headers/endian.o 00:01:57.600 CXX test/cpp_headers/env.o 00:01:57.600 CC app/spdk_dd/spdk_dd.o 00:01:57.600 CXX test/cpp_headers/env_dpdk.o 00:01:57.600 CXX test/cpp_headers/event.o 00:01:57.600 CXX test/cpp_headers/fd_group.o 00:01:57.600 CXX test/cpp_headers/fd.o 00:01:57.600 CXX test/cpp_headers/file.o 00:01:57.600 CXX test/cpp_headers/ftl.o 00:01:57.600 CXX test/cpp_headers/gpt_spec.o 00:01:57.600 CXX test/cpp_headers/hexlify.o 00:01:57.600 CXX test/cpp_headers/histogram_data.o 00:01:57.600 CXX test/cpp_headers/idxd.o 00:01:57.600 CXX test/cpp_headers/idxd_spec.o 00:01:57.600 CC app/spdk_tgt/spdk_tgt.o 00:01:57.600 CXX test/cpp_headers/ioat_spec.o 00:01:57.600 CXX test/cpp_headers/init.o 00:01:57.600 CXX test/cpp_headers/ioat.o 00:01:57.600 CXX test/cpp_headers/iscsi_spec.o 00:01:57.600 CXX test/cpp_headers/json.o 00:01:57.600 CXX test/cpp_headers/keyring.o 00:01:57.600 CXX test/cpp_headers/keyring_module.o 00:01:57.600 CXX test/cpp_headers/jsonrpc.o 00:01:57.600 CXX test/cpp_headers/likely.o 00:01:57.600 CXX test/cpp_headers/log.o 00:01:57.600 CXX test/cpp_headers/memory.o 00:01:57.600 CXX test/cpp_headers/lvol.o 00:01:57.600 CXX test/cpp_headers/mmio.o 00:01:57.600 CXX 
test/cpp_headers/nbd.o 00:01:57.600 CXX test/cpp_headers/notify.o 00:01:57.600 CXX test/cpp_headers/nvme.o 00:01:57.600 CXX test/cpp_headers/nvme_intel.o 00:01:57.600 CXX test/cpp_headers/nvme_ocssd.o 00:01:57.600 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:57.600 CXX test/cpp_headers/nvme_zns.o 00:01:57.600 CXX test/cpp_headers/nvmf_cmd.o 00:01:57.600 CXX test/cpp_headers/nvme_spec.o 00:01:57.600 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:57.600 CXX test/cpp_headers/nvmf.o 00:01:57.600 CXX test/cpp_headers/nvmf_spec.o 00:01:57.600 CXX test/cpp_headers/nvmf_transport.o 00:01:57.600 CXX test/cpp_headers/opal.o 00:01:57.600 CXX test/cpp_headers/opal_spec.o 00:01:57.600 CXX test/cpp_headers/pci_ids.o 00:01:57.600 CXX test/cpp_headers/pipe.o 00:01:57.876 CC app/iscsi_tgt/iscsi_tgt.o 00:01:57.876 CXX test/cpp_headers/queue.o 00:01:57.876 CXX test/cpp_headers/reduce.o 00:01:57.876 CXX test/cpp_headers/rpc.o 00:01:57.876 CXX test/cpp_headers/scsi.o 00:01:57.876 CXX test/cpp_headers/scheduler.o 00:01:57.876 CXX test/cpp_headers/scsi_spec.o 00:01:57.876 CXX test/cpp_headers/sock.o 00:01:57.876 CXX test/cpp_headers/stdinc.o 00:01:57.876 CXX test/cpp_headers/string.o 00:01:57.876 CXX test/cpp_headers/trace_parser.o 00:01:57.876 CXX test/cpp_headers/thread.o 00:01:57.876 CXX test/cpp_headers/trace.o 00:01:57.876 CXX test/cpp_headers/tree.o 00:01:57.876 CXX test/cpp_headers/ublk.o 00:01:57.876 CXX test/cpp_headers/util.o 00:01:57.876 CXX test/cpp_headers/uuid.o 00:01:57.876 CXX test/cpp_headers/version.o 00:01:57.876 CXX test/cpp_headers/vfio_user_pci.o 00:01:57.876 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:57.876 CC test/app/stub/stub.o 00:01:57.876 CC test/env/pci/pci_ut.o 00:01:57.876 CC examples/ioat/perf/perf.o 00:01:57.876 CXX test/cpp_headers/vfio_user_spec.o 00:01:57.876 CC test/env/memory/memory_ut.o 00:01:57.876 CC examples/util/zipf/zipf.o 00:01:57.876 CC test/env/vtophys/vtophys.o 00:01:57.876 CC test/dma/test_dma/test_dma.o 00:01:57.876 CC 
test/thread/poller_perf/poller_perf.o 00:01:57.876 CC examples/ioat/verify/verify.o 00:01:57.876 CC test/app/jsoncat/jsoncat.o 00:01:57.876 CC test/app/histogram_perf/histogram_perf.o 00:01:57.876 CXX test/cpp_headers/vhost.o 00:01:57.876 CXX test/cpp_headers/vmd.o 00:01:57.876 CC app/fio/nvme/fio_plugin.o 00:01:57.876 CC test/app/bdev_svc/bdev_svc.o 00:01:58.165 CC app/fio/bdev/fio_plugin.o 00:01:58.165 LINK spdk_lspci 00:01:58.430 CC test/env/mem_callbacks/mem_callbacks.o 00:01:58.430 LINK rpc_client_test 00:01:58.430 CXX test/cpp_headers/zipf.o 00:01:58.430 LINK interrupt_tgt 00:01:58.430 LINK spdk_nvme_discover 00:01:58.430 CXX test/cpp_headers/xor.o 00:01:58.430 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:58.430 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:58.688 LINK jsoncat 00:01:58.688 LINK nvmf_tgt 00:01:58.688 LINK histogram_perf 00:01:58.688 LINK spdk_trace_record 00:01:58.688 LINK env_dpdk_post_init 00:01:58.688 LINK zipf 00:01:58.688 LINK poller_perf 00:01:58.688 LINK spdk_tgt 00:01:58.688 LINK vtophys 00:01:58.688 LINK verify 00:01:58.688 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:58.688 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:58.688 LINK stub 00:01:58.688 LINK iscsi_tgt 00:01:58.688 LINK bdev_svc 00:01:58.688 LINK spdk_trace 00:01:58.688 LINK ioat_perf 00:01:58.947 LINK spdk_dd 00:01:58.947 LINK pci_ut 00:01:58.947 LINK test_dma 00:01:58.947 LINK spdk_nvme 00:01:59.205 CC examples/idxd/perf/perf.o 00:01:59.205 CC examples/sock/hello_world/hello_sock.o 00:01:59.205 CC test/event/reactor/reactor.o 00:01:59.205 CC examples/vmd/lsvmd/lsvmd.o 00:01:59.205 CC test/event/event_perf/event_perf.o 00:01:59.205 LINK vhost_fuzz 00:01:59.205 CC examples/vmd/led/led.o 00:01:59.205 LINK spdk_bdev 00:01:59.205 CC test/event/reactor_perf/reactor_perf.o 00:01:59.205 CC test/event/app_repeat/app_repeat.o 00:01:59.205 LINK nvme_fuzz 00:01:59.205 CC test/event/scheduler/scheduler.o 00:01:59.205 CC examples/thread/thread/thread_ex.o 00:01:59.205 LINK 
spdk_nvme_identify 00:01:59.205 CC app/vhost/vhost.o 00:01:59.205 LINK mem_callbacks 00:01:59.205 LINK spdk_nvme_perf 00:01:59.205 LINK spdk_top 00:01:59.205 LINK lsvmd 00:01:59.205 LINK event_perf 00:01:59.205 LINK reactor 00:01:59.205 LINK led 00:01:59.205 LINK reactor_perf 00:01:59.463 LINK app_repeat 00:01:59.463 LINK hello_sock 00:01:59.463 LINK memory_ut 00:01:59.463 LINK vhost 00:01:59.463 LINK scheduler 00:01:59.463 LINK idxd_perf 00:01:59.463 CC test/nvme/aer/aer.o 00:01:59.463 CC test/nvme/startup/startup.o 00:01:59.463 CC test/nvme/err_injection/err_injection.o 00:01:59.463 CC test/nvme/compliance/nvme_compliance.o 00:01:59.463 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:59.463 CC test/nvme/e2edp/nvme_dp.o 00:01:59.463 CC test/nvme/sgl/sgl.o 00:01:59.463 CC test/nvme/reset/reset.o 00:01:59.463 CC test/nvme/overhead/overhead.o 00:01:59.463 CC test/nvme/simple_copy/simple_copy.o 00:01:59.463 CC test/nvme/fdp/fdp.o 00:01:59.463 CC test/nvme/cuse/cuse.o 00:01:59.463 CC test/nvme/reserve/reserve.o 00:01:59.463 CC test/nvme/fused_ordering/fused_ordering.o 00:01:59.463 CC test/nvme/connect_stress/connect_stress.o 00:01:59.463 CC test/nvme/boot_partition/boot_partition.o 00:01:59.463 LINK thread 00:01:59.463 CC test/blobfs/mkfs/mkfs.o 00:01:59.463 CC test/accel/dif/dif.o 00:01:59.722 CC test/lvol/esnap/esnap.o 00:01:59.722 LINK startup 00:01:59.722 LINK reserve 00:01:59.722 LINK doorbell_aers 00:01:59.722 LINK boot_partition 00:01:59.722 LINK aer 00:01:59.722 LINK err_injection 00:01:59.722 LINK connect_stress 00:01:59.722 LINK fused_ordering 00:01:59.722 LINK simple_copy 00:01:59.722 LINK sgl 00:01:59.722 LINK nvme_dp 00:01:59.722 LINK mkfs 00:01:59.981 LINK nvme_compliance 00:01:59.981 CC examples/nvme/abort/abort.o 00:01:59.981 CC examples/nvme/reconnect/reconnect.o 00:01:59.981 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:59.981 CC examples/nvme/hotplug/hotplug.o 00:01:59.981 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:59.981 CC 
examples/nvme/hello_world/hello_world.o 00:01:59.981 CC examples/nvme/arbitration/arbitration.o 00:01:59.981 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:59.981 LINK fdp 00:01:59.981 LINK dif 00:01:59.981 LINK reset 00:01:59.981 LINK cmb_copy 00:01:59.981 CC examples/accel/perf/accel_perf.o 00:02:00.239 LINK pmr_persistence 00:02:00.239 LINK overhead 00:02:00.239 CC examples/blob/cli/blobcli.o 00:02:00.239 LINK arbitration 00:02:00.239 LINK hello_world 00:02:00.239 LINK hotplug 00:02:00.239 CC examples/blob/hello_world/hello_blob.o 00:02:00.239 LINK abort 00:02:00.239 LINK reconnect 00:02:00.239 LINK iscsi_fuzz 00:02:00.497 LINK nvme_manage 00:02:00.497 LINK hello_blob 00:02:00.497 LINK accel_perf 00:02:00.497 CC test/bdev/bdevio/bdevio.o 00:02:00.754 LINK blobcli 00:02:00.754 LINK cuse 00:02:01.012 LINK bdevio 00:02:01.012 CC examples/bdev/bdevperf/bdevperf.o 00:02:01.270 CC examples/bdev/hello_world/hello_bdev.o 00:02:01.528 LINK hello_bdev 00:02:01.786 LINK bdevperf 00:02:02.722 CC examples/nvmf/nvmf/nvmf.o 00:02:02.722 LINK nvmf 00:02:04.626 LINK esnap 00:02:05.193 00:02:05.193 real 0m52.202s 00:02:05.193 user 8m31.158s 00:02:05.193 sys 4m16.021s 00:02:05.193 12:31:56 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:05.193 12:31:56 make -- common/autotest_common.sh@10 -- $ set +x 00:02:05.193 ************************************ 00:02:05.193 END TEST make 00:02:05.193 ************************************ 00:02:05.193 12:31:56 -- common/autotest_common.sh@1142 -- $ return 0 00:02:05.193 12:31:56 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:05.193 12:31:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:05.193 12:31:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:05.193 12:31:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.193 12:31:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:05.193 12:31:56 
-- pm/common@44 -- $ pid=3615576 00:02:05.193 12:31:56 -- pm/common@50 -- $ kill -TERM 3615576 00:02:05.193 12:31:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.193 12:31:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:05.193 12:31:56 -- pm/common@44 -- $ pid=3615578 00:02:05.193 12:31:56 -- pm/common@50 -- $ kill -TERM 3615578 00:02:05.193 12:31:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.193 12:31:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:05.193 12:31:56 -- pm/common@44 -- $ pid=3615579 00:02:05.193 12:31:56 -- pm/common@50 -- $ kill -TERM 3615579 00:02:05.193 12:31:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.193 12:31:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:05.193 12:31:56 -- pm/common@44 -- $ pid=3615603 00:02:05.193 12:31:56 -- pm/common@50 -- $ sudo -E kill -TERM 3615603 00:02:05.193 12:31:56 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:05.193 12:31:56 -- nvmf/common.sh@7 -- # uname -s 00:02:05.193 12:31:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:05.193 12:31:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:05.193 12:31:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:05.193 12:31:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:05.193 12:31:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:05.193 12:31:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:05.193 12:31:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:05.193 12:31:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:05.193 12:31:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:05.193 12:31:57 -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:02:05.193 12:31:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:02:05.193 12:31:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:02:05.193 12:31:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:05.193 12:31:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:05.193 12:31:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:05.193 12:31:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:05.193 12:31:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:05.193 12:31:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:05.193 12:31:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:05.193 12:31:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:05.193 12:31:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.193 12:31:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.193 12:31:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.193 12:31:57 -- paths/export.sh@5 -- # export PATH 00:02:05.193 12:31:57 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.193 12:31:57 -- nvmf/common.sh@47 -- # : 0 00:02:05.193 12:31:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:05.193 12:31:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:05.193 12:31:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:05.193 12:31:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:05.193 12:31:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:05.193 12:31:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:05.193 12:31:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:05.193 12:31:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:05.193 12:31:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:05.193 12:31:57 -- spdk/autotest.sh@32 -- # uname -s 00:02:05.193 12:31:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:05.193 12:31:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:05.193 12:31:57 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:05.193 12:31:57 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:05.193 12:31:57 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:05.193 12:31:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:05.193 12:31:57 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:05.193 12:31:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:05.193 12:31:57 -- spdk/autotest.sh@48 -- # udevadm_pid=3677869 00:02:05.193 12:31:57 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:05.193 
12:31:57 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:05.193 12:31:57 -- pm/common@17 -- # local monitor 00:02:05.193 12:31:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.193 12:31:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.193 12:31:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.193 12:31:57 -- pm/common@21 -- # date +%s 00:02:05.193 12:31:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.193 12:31:57 -- pm/common@21 -- # date +%s 00:02:05.193 12:31:57 -- pm/common@25 -- # sleep 1 00:02:05.193 12:31:57 -- pm/common@21 -- # date +%s 00:02:05.193 12:31:57 -- pm/common@21 -- # date +%s 00:02:05.193 12:31:57 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721039517 00:02:05.193 12:31:57 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721039517 00:02:05.193 12:31:57 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721039517 00:02:05.193 12:31:57 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721039517 00:02:05.193 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721039517_collect-vmstat.pm.log 00:02:05.193 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721039517_collect-cpu-load.pm.log 00:02:05.193 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721039517_collect-cpu-temp.pm.log 00:02:05.193 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721039517_collect-bmc-pm.bmc.pm.log 00:02:06.127 12:31:58 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:06.127 12:31:58 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:06.127 12:31:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:06.127 12:31:58 -- common/autotest_common.sh@10 -- # set +x 00:02:06.127 12:31:58 -- spdk/autotest.sh@59 -- # create_test_list 00:02:06.127 12:31:58 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:06.127 12:31:58 -- common/autotest_common.sh@10 -- # set +x 00:02:06.386 12:31:58 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:06.386 12:31:58 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:06.386 12:31:58 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:06.386 12:31:58 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:06.386 12:31:58 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:06.386 12:31:58 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:06.386 12:31:58 -- common/autotest_common.sh@1455 -- # uname 00:02:06.387 12:31:58 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:06.387 12:31:58 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:06.387 12:31:58 -- common/autotest_common.sh@1475 -- # uname 00:02:06.387 12:31:58 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:06.387 12:31:58 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:06.387 12:31:58 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:06.387 12:31:58 -- spdk/autotest.sh@72 -- # 
hash lcov 00:02:06.387 12:31:58 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:06.387 12:31:58 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:06.387 --rc lcov_branch_coverage=1 00:02:06.387 --rc lcov_function_coverage=1 00:02:06.387 --rc genhtml_branch_coverage=1 00:02:06.387 --rc genhtml_function_coverage=1 00:02:06.387 --rc genhtml_legend=1 00:02:06.387 --rc geninfo_all_blocks=1 00:02:06.387 ' 00:02:06.387 12:31:58 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:06.387 --rc lcov_branch_coverage=1 00:02:06.387 --rc lcov_function_coverage=1 00:02:06.387 --rc genhtml_branch_coverage=1 00:02:06.387 --rc genhtml_function_coverage=1 00:02:06.387 --rc genhtml_legend=1 00:02:06.387 --rc geninfo_all_blocks=1 00:02:06.387 ' 00:02:06.387 12:31:58 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:06.387 --rc lcov_branch_coverage=1 00:02:06.387 --rc lcov_function_coverage=1 00:02:06.387 --rc genhtml_branch_coverage=1 00:02:06.387 --rc genhtml_function_coverage=1 00:02:06.387 --rc genhtml_legend=1 00:02:06.387 --rc geninfo_all_blocks=1 00:02:06.387 --no-external' 00:02:06.387 12:31:58 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:06.387 --rc lcov_branch_coverage=1 00:02:06.387 --rc lcov_function_coverage=1 00:02:06.387 --rc genhtml_branch_coverage=1 00:02:06.387 --rc genhtml_function_coverage=1 00:02:06.387 --rc genhtml_legend=1 00:02:06.387 --rc geninfo_all_blocks=1 00:02:06.387 --no-external' 00:02:06.387 12:31:58 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:06.387 lcov: LCOV version 1.14 00:02:06.387 12:31:58 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:11.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:11.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:11.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:11.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:11.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:11.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:11.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:11.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:11.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:11.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:11.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:11.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:11.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:11.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:11.656 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:11.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:11.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:11.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:11.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:11.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:11.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:11.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:11.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:11.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:11.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:11.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:11.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:11.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:11.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:11.656 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:11.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:11.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:11.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:11.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:11.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:11.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:11.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:11.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:11.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:11.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:11.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:11.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:11.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:11.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:11.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:11.915 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno
00:02:11.915 geninfo: identical "<file>.gcno:no functions found" / "WARNING: GCOV did not produce any data for <file>.gcno" warning pairs repeated (timestamps 00:02:11.915-00:02:12.434) for the remaining header-compile objects under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/: dif, env_dpdk, cpuset, ftl, crc64, fd, file, histogram_data, hexlify, gpt_spec, init, json, idxd, idxd_spec, ioat_spec, iscsi_spec, ioat, keyring_module, jsonrpc, likely, lvol, keyring, notify, nvme, mmio, log, memory, nbd, nvme_ocssd, nvmf_fc_spec, nvme_zns, nvme_spec, nvme_intel, nvmf, opal, nvmf_transport, nvme_ocssd_spec, nvmf_cmd, pci_ids, nvmf_spec, opal_spec, rpc, pipe, scsi, reduce, scsi_spec, stdinc, queue, scheduler, sock, string, trace_parser, version, tree, trace, thread, uuid, ublk, util, vfio_user_pci, vfio_user_spec, vhost, vmd, zipf, xor (all .gcno)
00:02:34.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:34.389 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:40.950 12:32:32 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:40.950 12:32:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:40.950 12:32:32 -- common/autotest_common.sh@10 -- # set +x 00:02:40.950 12:32:32 -- spdk/autotest.sh@91 -- # rm -f 00:02:40.950 12:32:32 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:44.236 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:02:44.236 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:44.236 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:44.236 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:44.236 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:02:44.236 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:44.236 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:44.236 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:44.236 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:44.236 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:44.236 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:44.236 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:44.236 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:44.236 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:44.236 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:44.236 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:44.236 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:44.236 12:32:35 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:44.236 12:32:35 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:44.236 12:32:35 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:44.236 12:32:35 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:44.236 12:32:35 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:44.236 12:32:35 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:44.236 12:32:35 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:44.236 12:32:35 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:44.236 12:32:35 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:44.236 12:32:35 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:44.236 12:32:35 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:44.236 12:32:35 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:44.236 12:32:35 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:44.236 12:32:35 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:44.236 12:32:35 -- 
scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:44.236 No valid GPT data, bailing 00:02:44.236 12:32:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:44.236 12:32:36 -- scripts/common.sh@391 -- # pt= 00:02:44.236 12:32:36 -- scripts/common.sh@392 -- # return 1 00:02:44.236 12:32:36 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:44.236 1+0 records in 00:02:44.236 1+0 records out 00:02:44.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00589168 s, 178 MB/s 00:02:44.236 12:32:36 -- spdk/autotest.sh@118 -- # sync 00:02:44.236 12:32:36 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:44.236 12:32:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:44.236 12:32:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:50.804 12:32:41 -- spdk/autotest.sh@124 -- # uname -s 00:02:50.804 12:32:41 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:50.804 12:32:41 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:50.804 12:32:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:50.804 12:32:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:50.804 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:02:50.804 ************************************ 00:02:50.804 START TEST setup.sh 00:02:50.804 ************************************ 00:02:50.804 12:32:41 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:50.804 * Looking for test storage... 
00:02:50.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:50.804 12:32:42 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:50.804 12:32:42 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:50.804 12:32:42 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:50.804 12:32:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:50.804 12:32:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:50.804 12:32:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:50.804 ************************************ 00:02:50.804 START TEST acl 00:02:50.804 ************************************ 00:02:50.805 12:32:42 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:50.805 * Looking for test storage... 00:02:50.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:50.805 12:32:42 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:50.805 12:32:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:50.805 12:32:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:50.805 12:32:42 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:50.805 12:32:42 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:50.805 12:32:42 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:50.805 12:32:42 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:50.805 12:32:42 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:50.805 12:32:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:50.805 12:32:42 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:50.805 12:32:42 
setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:50.805 12:32:42 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:50.805 12:32:42 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:50.805 12:32:42 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:50.805 12:32:42 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:50.805 12:32:42 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:53.403 12:32:45 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:53.403 12:32:45 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:53.403 12:32:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.403 12:32:45 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:53.403 12:32:45 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.403 12:32:45 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:55.952 Hugepages 00:02:55.952 node hugesize free / total 00:02:56.211 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:56.211 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:56.211 12:32:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.211 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:56.211 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:56.211 12:32:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.211 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:56.211 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:56.211 12:32:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.211 00:02:56.211 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:56.211 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:56.211 
12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:56.211 12:32:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.211 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- 
# continue 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.212 12:32:47 
setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.212 12:32:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.212 12:32:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:86:00.0 == *:*:*.* ]] 00:02:56.212 12:32:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:56.212 12:32:48 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]] 00:02:56.212 12:32:48 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:56.212 12:32:48 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:56.212 12:32:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.212 12:32:48 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:56.212 12:32:48 setup.sh.acl -- 
setup/acl.sh@54 -- # run_test denied denied 00:02:56.212 12:32:48 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:56.212 12:32:48 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:56.212 12:32:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:56.212 ************************************ 00:02:56.212 START TEST denied 00:02:56.212 ************************************ 00:02:56.212 12:32:48 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:56.212 12:32:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:86:00.0' 00:02:56.212 12:32:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:56.212 12:32:48 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:86:00.0' 00:02:56.212 12:32:48 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.212 12:32:48 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:59.501 0000:86:00.0 (8086 0a54): Skipping denied controller at 0000:86:00.0 00:02:59.501 12:32:51 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:86:00.0 00:02:59.501 12:32:51 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:59.501 12:32:51 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:59.501 12:32:51 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:86:00.0 ]] 00:02:59.501 12:32:51 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:86:00.0/driver 00:02:59.501 12:32:51 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:59.501 12:32:51 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:59.501 12:32:51 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:59.501 12:32:51 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:59.501 12:32:51 
setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:03.691 00:03:03.691 real 0m7.181s 00:03:03.691 user 0m2.275s 00:03:03.691 sys 0m4.154s 00:03:03.691 12:32:55 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:03.691 12:32:55 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:03.691 ************************************ 00:03:03.691 END TEST denied 00:03:03.691 ************************************ 00:03:03.691 12:32:55 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:03.691 12:32:55 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:03.691 12:32:55 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:03.691 12:32:55 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:03.691 12:32:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:03.691 ************************************ 00:03:03.691 START TEST allowed 00:03:03.691 ************************************ 00:03:03.691 12:32:55 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:03.691 12:32:55 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:86:00.0 00:03:03.691 12:32:55 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:03.691 12:32:55 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.691 12:32:55 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:03.691 12:32:55 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:86:00.0 .*: nvme -> .*' 00:03:07.892 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:03:07.892 12:32:59 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:07.892 12:32:59 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:07.892 12:32:59 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:07.892 
12:32:59 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:07.892 12:32:59 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.183 00:03:11.183 real 0m7.261s 00:03:11.183 user 0m2.239s 00:03:11.183 sys 0m4.111s 00:03:11.183 12:33:02 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:11.183 12:33:02 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:11.183 ************************************ 00:03:11.183 END TEST allowed 00:03:11.183 ************************************ 00:03:11.183 12:33:02 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:11.183 00:03:11.183 real 0m20.597s 00:03:11.183 user 0m6.759s 00:03:11.183 sys 0m12.368s 00:03:11.183 12:33:02 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:11.183 12:33:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:11.183 ************************************ 00:03:11.183 END TEST acl 00:03:11.183 ************************************ 00:03:11.183 12:33:02 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:11.183 12:33:02 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:11.183 12:33:02 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:11.183 12:33:02 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:11.183 12:33:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:11.183 ************************************ 00:03:11.183 START TEST hugepages 00:03:11.183 ************************************ 00:03:11.183 12:33:02 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:11.183 * Looking for test storage... 
00:03:11.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:11.183 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:11.183 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:11.183 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:11.183 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:11.183 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:11.183 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:11.183 12:33:02 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:11.183 12:33:02 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:11.183 12:33:02 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:11.183 12:33:02 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:11.183 12:33:02 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.183 12:33:02 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.183 12:33:02 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.183 12:33:02 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.183 12:33:02 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.183 12:33:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.183 12:33:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.183 12:33:02 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 69554140 kB' 'MemAvailable: 73017948 kB' 'Buffers: 2704 kB' 'Cached: 14476764 kB' 'SwapCached: 0 kB' 'Active: 11631940 kB' 'Inactive: 3528960 kB' 'Active(anon): 11179628 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 684872 kB' 'Mapped: 203848 kB' 'Shmem: 10498196 kB' 'KReclaimable: 271864 kB' 'Slab: 912144 kB' 'SReclaimable: 271864 kB' 'SUnreclaim: 640280 kB' 'KernelStack: 22816 kB' 'PageTables: 9520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434752 kB' 'Committed_AS: 12627828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220236 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:11.183
[xtrace condensed: setup/common.sh@31-32 read each /proc/meminfo key in turn (MemTotal, MemFree, ... HugePages_Surp), hitting `continue` on every non-matching key until Hugepagesize is reached]
12:33:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.184 12:33:02 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:11.184 12:33:02 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:11.184 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@18 -- # 
global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:11.185 
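The long xtrace above is get_meminfo scanning /proc/meminfo one key at a time until the requested field matches. A condensed sketch of that loop, reconstructed from the trace (the real setup/common.sh additionally supports reading a per-node meminfo file):

```shell
#!/usr/bin/env bash
# Read /proc/meminfo as "key: value unit" records; IFS=': ' splits on both
# the colon and the spaces, as seen in the setup/common.sh@31 trace lines.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"          # e.g. Hugepagesize yields 2048 (value is in kB)
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo Hugepagesize
```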
12:33:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:11.185 12:33:02 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:11.185 12:33:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:11.185 12:33:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:11.185 12:33:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:11.185 ************************************ 00:03:11.185 START TEST default_setup 00:03:11.185 ************************************ 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:11.185 12:33:02 
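clear_hp in the trace walks every node's hugepages directories and echoes 0 into each nr_hugepages file. A dry-run sketch that only prints the intended writes, so it runs without root; the two-node layout and the 2 MB/1 GB page sizes are assumptions matching this machine's trace, not a general rule:

```shell
#!/usr/bin/env bash
# Dry-run version of the clear_hp step (hugepages.sh@37-41 above): the real
# script writes 0 into each sysfs file; here we only print the target paths.
clear_hp_dry_run() {
    local node hp
    for node in node0 node1; do                        # no_nodes=2 in the trace
        for hp in hugepages-2048kB hugepages-1048576kB; do
            echo "0 > /sys/devices/system/node/$node/hugepages/$hp/nr_hugepages"
        done
    done
}

clear_hp_dry_run
```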
setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.185 12:33:02 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:14.470 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:14.470 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:14.470 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:14.470 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:14.470 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:14.470 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:14.470 0000:00:04.1 (8086 2021): ioatdma -> 
vfio-pci 00:03:14.470 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:14.470 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:14.470 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:14.470 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:14.470 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:14.470 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:14.470 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:14.470 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:14.470 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:15.038 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71715012 kB' 'MemAvailable: 75178788 kB' 'Buffers: 2704 kB' 'Cached: 14476880 kB' 'SwapCached: 0 kB' 'Active: 11650556 kB' 'Inactive: 3528960 kB' 'Active(anon): 11198244 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 703196 kB' 'Mapped: 203908 kB' 'Shmem: 10498312 kB' 'KReclaimable: 271800 kB' 'Slab: 909796 kB' 'SReclaimable: 271800 kB' 'SUnreclaim: 637996 kB' 'KernelStack: 23024 kB' 'PageTables: 9512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12648544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220316 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 
30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.038 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ [... identical per-key scan (IFS/read/compare/continue) repeated for each remaining /proc/meminfo field up to AnonHugePages ...] 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.039 12:33:06 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71714732 kB' 'MemAvailable: 75178508 kB' 'Buffers: 2704 kB' 'Cached: 14476880 kB' 'SwapCached: 0 kB' 'Active: 11650332 kB' 'Inactive: 3528960 kB' 'Active(anon): 11198020 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 702956 kB' 'Mapped: 203804 kB' 'Shmem: 10498312 kB' 'KReclaimable: 271800 kB' 'Slab: 909764 kB' 'SReclaimable: 271800 kB' 'SUnreclaim: 637964 kB' 'KernelStack: 23056 kB' 'PageTables: 9736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12648564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220220 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.039 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.039 12:33:06 
[... identical per-key scan (IFS/read/compare/continue) repeated for each remaining /proc/meminfo field ...] 00:03:15.040 12:33:06
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 
00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71712872 kB' 'MemAvailable: 75176648 kB' 'Buffers: 2704 kB' 'Cached: 14476900 kB' 'SwapCached: 0 kB' 'Active: 11649792 kB' 'Inactive: 3528960 kB' 'Active(anon): 11197480 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 702920 kB' 'Mapped: 203804 kB' 'Shmem: 10498332 kB' 'KReclaimable: 271800 kB' 'Slab: 909772 kB' 'SReclaimable: 271800 kB' 'SUnreclaim: 637972 kB' 'KernelStack: 22976 kB' 'PageTables: 9796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12648584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220364 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB'
00:03:15.040 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [scan loop checks every key in the snapshot above, MemTotal through HugePages_Free, against HugePages_Rsvd -- none match]
00:03:15.304 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:15.304 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:15.304 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:15.304 12:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:15.304 12:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:15.304 12:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:15.304 12:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:15.304 12:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:15.304 12:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:15.304 12:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:15.304 12:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:15.304 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:15.304 12:33:06 setup.sh.hugepages.default_setup --
setup/common.sh@18 -- # local node=
00:03:15.304 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:15.304 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:15.304 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.304 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.304 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.304 12:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.304 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.304 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:15.304 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:15.304 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71712424 kB' 'MemAvailable: 75176200 kB' 'Buffers: 2704 kB' 'Cached: 14476924 kB' 'SwapCached: 0 kB' 'Active: 11650020 kB' 'Inactive: 3528960 kB' 'Active(anon): 11197708 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 702656 kB' 'Mapped: 203804 kB' 'Shmem: 10498356 kB' 'KReclaimable: 271800 kB' 'Slab: 909772 kB' 'SReclaimable: 271800 kB' 'SUnreclaim: 637972 kB' 'KernelStack: 23040 kB' 'PageTables: 9524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12648608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220396 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB'
00:03:15.304 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [scan loop checks MemTotal, MemFree, MemAvailable, Buffers against HugePages_Total -- none match]
00:03:15.304
12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.304 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.304 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.304 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.304 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.304 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.304 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.304 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.304 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.304 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.304 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.304 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.304 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.304 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.304 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.305 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.306 12:33:07 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 40872356 kB' 'MemUsed: 7196040 kB' 'SwapCached: 0 kB' 'Active: 3812776 kB' 'Inactive: 228300 kB' 'Active(anon): 3686256 kB' 'Inactive(anon): 0 kB' 'Active(file): 126520 kB' 'Inactive(file): 228300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3921248 kB' 'Mapped: 31728 kB' 'AnonPages: 123052 kB' 'Shmem: 3566428 kB' 'KernelStack: 11544 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123476 kB' 'Slab: 404200 kB' 'SReclaimable: 123476 kB' 'SUnreclaim: 280724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.306 12:33:07 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.306 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.307 12:33:07 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.307 12:33:07 [... identical setup/common.sh@31 IFS=': ' / read -r var val _ / @32 compare-and-continue iterations elided for the remaining non-matching meminfo fields (SecPageTables through FilePmdMapped) ...] 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.307 12:33:07
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 
00:03:15.307 node0=1024 expecting 1024 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:15.307 00:03:15.307 real 0m4.143s 00:03:15.307 user 0m1.389s 00:03:15.307 sys 0m1.998s 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:15.307 12:33:07 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:15.307 ************************************ 00:03:15.307 END TEST default_setup 00:03:15.307 ************************************ 00:03:15.307 12:33:07 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:15.307 12:33:07 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:15.308 12:33:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:15.308 12:33:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:15.308 12:33:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:15.308 ************************************ 00:03:15.308 START TEST per_node_1G_alloc 00:03:15.308 ************************************ 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:15.308 12:33:07 
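The `node0=1024 expecting 1024` check above compares the hugepages actually allocated on a NUMA node against the expected count. A minimal standalone sketch of reading that per-node count from sysfs (a hypothetical helper, not the actual setup/hugepages.sh; the `root` parameter is added here only so the sketch can be exercised against a fake tree):

```shell
# Hypothetical sketch of the per-node check behind "node0=1024 expecting 1024":
# read a NUMA node's allocated 2048 kB hugepage count from sysfs.
# 'root' stands in for /sys/devices/system/node and exists only for testing.
node_hugepages() {
    local node=$1 root=${2:-/sys/devices/system/node}
    cat "$root/node$node/hugepages/hugepages-2048kB/nr_hugepages"
}
# Usage on a real host: node_hugepages 0
```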
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:15.308 12:33:07 
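The trace above shows get_test_nr_hugepages computing a per-node page count: a 1048576 kB request divided by the default hugepage size gives 512 pages, assigned to each of nodes 0 and 1. A minimal sketch of that arithmetic (an illustration assuming 2048 kB default hugepages, not the actual hugepages.sh):

```shell
# Sketch of the per-node sizing visible in the trace: split a size
# request into hugepages and assign that count to each requested node.
size_kb=1048576          # requested size in kB (1 GiB)
default_hugepages=2048   # assumed default hugepage size in kB (2 MiB)
nr_hugepages=$(( size_kb / default_hugepages ))   # 512
user_nodes=(0 1)
declare -A nodes_test
for node in "${user_nodes[@]}"; do
    nodes_test[$node]=$nr_hugepages   # 512 pages on each NUMA node
done
echo "NRHUGE=$nr_hugepages HUGENODE=${user_nodes[0]},${user_nodes[1]}"
```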
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.308 12:33:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:18.604 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:18.604 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:18.604 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:18.604 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:18.604 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:18.604 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:18.604 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:18.604 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:18.604 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:18.604 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:18.604 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:18.604 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:18.604 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:18.604 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:18.604 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:18.604 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:18.604 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local 
sorted_t 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71685452 kB' 'MemAvailable: 
75149228 kB' 'Buffers: 2704 kB' 'Cached: 14477016 kB' 'SwapCached: 0 kB' 'Active: 11647292 kB' 'Inactive: 3528960 kB' 'Active(anon): 11194980 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 699708 kB' 'Mapped: 203036 kB' 'Shmem: 10498448 kB' 'KReclaimable: 271800 kB' 'Slab: 909732 kB' 'SReclaimable: 271800 kB' 'SUnreclaim: 637932 kB' 'KernelStack: 23168 kB' 'PageTables: 9800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12641464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220472 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.604 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.605 12:33:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.605 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.605 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.605 12:33:10 [... identical setup/common.sh@31 IFS=': ' / read -r var val _ / @32 compare-and-continue iterations elided for the remaining non-matching meminfo fields (Buffers through Percpu) ...] 00:03:18.606 12:33:10 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.606 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.606 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.606 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.606 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.606 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.606 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.606 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:18.606 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:18.606 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.606 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:18.606 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.606 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.606 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.606 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.606 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.606 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.606 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.606 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.606 12:33:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.606 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71684796 kB' 'MemAvailable: 75148572 kB' 'Buffers: 2704 kB' 'Cached: 14477020 kB' 'SwapCached: 0 kB' 'Active: 11645948 kB' 'Inactive: 3528960 kB' 'Active(anon): 11193636 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 698792 kB' 'Mapped: 202948 kB' 'Shmem: 10498452 kB' 'KReclaimable: 271800 kB' 'Slab: 909828 kB' 'SReclaimable: 271800 kB' 'SUnreclaim: 638028 kB' 'KernelStack: 23088 kB' 'PageTables: 9740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12639540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220472 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB'
[... repeated xtrace trimmed: setup/common.sh@31-32 reads each meminfo field above in turn and hits `continue` until the requested field HugePages_Surp matches ...]
00:03:18.608 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.608 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.608 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.608 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:18.608 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- #
get_meminfo HugePages_Rsvd 00:03:18.608 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:18.608 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:18.608 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.608 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.608 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.608 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.608 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.608 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.608 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.608 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.608 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.608 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71680132 kB' 'MemAvailable: 75143908 kB' 'Buffers: 2704 kB' 'Cached: 14477036 kB' 'SwapCached: 0 kB' 'Active: 11645960 kB' 'Inactive: 3528960 kB' 'Active(anon): 11193648 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 698668 kB' 'Mapped: 202916 kB' 'Shmem: 10498468 kB' 'KReclaimable: 271800 kB' 'Slab: 909820 kB' 'SReclaimable: 271800 kB' 'SUnreclaim: 638020 kB' 'KernelStack: 22848 kB' 'PageTables: 9312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12638296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220312 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB'
[... repeated xtrace trimmed: setup/common.sh@31-32 reads each meminfo field above in turn and hits `continue` while scanning for HugePages_Rsvd; the captured log ends mid-scan ...]
Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.609 12:33:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.609 12:33:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.609 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:18.610 nr_hugepages=1024 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:18.610 resv_hugepages=0 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:18.610 surplus_hugepages=0 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:18.610 anon_hugepages=0 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.610 12:33:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71680028 kB' 'MemAvailable: 75143788 kB' 'Buffers: 2704 kB' 'Cached: 14477080 kB' 'SwapCached: 0 kB' 'Active: 11644748 kB' 'Inactive: 3528960 kB' 'Active(anon): 11192436 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 697476 kB' 'Mapped: 202912 kB' 'Shmem: 10498512 kB' 'KReclaimable: 271768 kB' 'Slab: 909724 kB' 'SReclaimable: 271768 kB' 'SUnreclaim: 637956 kB' 'KernelStack: 22784 kB' 'PageTables: 9204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12638448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220312 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.610 12:33:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.610 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:18.611 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 41929044 kB' 'MemUsed: 6139352 kB' 
'SwapCached: 0 kB' 'Active: 3805552 kB' 'Inactive: 228300 kB' 'Active(anon): 3679032 kB' 'Inactive(anon): 0 kB' 'Active(file): 126520 kB' 'Inactive(file): 228300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3921368 kB' 'Mapped: 31720 kB' 'AnonPages: 115708 kB' 'Shmem: 3566548 kB' 'KernelStack: 11304 kB' 'PageTables: 3144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123444 kB' 'Slab: 404252 kB' 'SReclaimable: 123444 kB' 'SUnreclaim: 280808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.612 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.613 12:33:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.613 12:33:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.613 12:33:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218208 kB' 'MemFree: 29751128 kB' 'MemUsed: 14467080 kB' 'SwapCached: 0 kB' 'Active: 7838816 kB' 'Inactive: 3300660 kB' 'Active(anon): 7513024 kB' 'Inactive(anon): 0 kB' 'Active(file): 325792 kB' 'Inactive(file): 3300660 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10558436 kB' 'Mapped: 171192 kB' 'AnonPages: 581332 kB' 'Shmem: 6931984 kB' 'KernelStack: 11464 kB' 'PageTables: 6020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 148324 kB' 'Slab: 505472 kB' 'SReclaimable: 148324 kB' 'SUnreclaim: 357148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.613 12:33:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.613 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.614 12:33:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [... identical setup/common.sh@31-32 compare/continue xtrace iterations for the remaining /proc/meminfo keys (Dirty through HugePages_Free) omitted ...] 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:18.614 node0=512 expecting 512 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.614
12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:18.614 node1=512 expecting 512 00:03:18.614 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:18.614 00:03:18.614 real 0m3.188s 00:03:18.614 user 0m1.271s 00:03:18.614 sys 0m1.959s 00:03:18.615 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:18.615 12:33:10 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:18.615 ************************************ 00:03:18.615 END TEST per_node_1G_alloc 00:03:18.615 ************************************ 00:03:18.615 12:33:10 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:18.615 12:33:10 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:18.615 12:33:10 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:18.615 12:33:10 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:18.615 12:33:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:18.615 ************************************ 00:03:18.615 START TEST even_2G_alloc 00:03:18.615 ************************************ 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.615 12:33:10 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # 
NRHUGE=1024 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.615 12:33:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:21.148 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:21.148 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:21.409 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:21.409 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:21.409 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:21.409 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:21.409 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:21.409 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:21.409 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:21.410 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:21.410 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:21.410 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:21.410 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:21.410 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:21.410 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:21.410 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:21.410 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # 
local sorted_t 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71707428 kB' 'MemAvailable: 75171188 kB' 'Buffers: 2704 kB' 'Cached: 14477180 kB' 'SwapCached: 0 
kB' 'Active: 11639700 kB' 'Inactive: 3528960 kB' 'Active(anon): 11187388 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 692008 kB' 'Mapped: 201776 kB' 'Shmem: 10498612 kB' 'KReclaimable: 271768 kB' 'Slab: 909256 kB' 'SReclaimable: 271768 kB' 'SUnreclaim: 637488 kB' 'KernelStack: 22736 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12626532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220264 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.410 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [... identical setup/common.sh@31-32 compare/continue xtrace iterations for the remaining /proc/meminfo keys (MemFree through HardwareCorrupted) omitted ...] 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:21.411 12:33:13
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71708476 kB' 'MemAvailable: 75172236 kB' 'Buffers: 2704 kB' 'Cached: 14477184 kB' 'SwapCached: 0 kB' 'Active: 11640116 kB' 'Inactive: 3528960 kB' 'Active(anon): 11187804 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 692400 kB' 'Mapped: 201776 kB' 'Shmem: 10498616 kB' 'KReclaimable: 271768 kB' 'Slab: 909304 kB' 'SReclaimable: 271768 kB' 'SUnreclaim: 637536 kB' 
'KernelStack: 22768 kB' 'PageTables: 9108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12626548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220216 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.411 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.412 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 12:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.413 12:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.413 12:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71708332 kB' 'MemAvailable: 75172092 kB' 'Buffers: 2704 kB' 'Cached: 14477200 kB' 'SwapCached: 0 kB' 'Active: 11640140 kB' 'Inactive: 3528960 kB' 'Active(anon): 11187828 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 692400 kB' 'Mapped: 201776 kB' 'Shmem: 10498632 kB' 'KReclaimable: 271768 kB' 'Slab: 909304 kB' 'SReclaimable: 271768 kB' 'SUnreclaim: 637536 kB' 'KernelStack: 22768 kB' 'PageTables: 9108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12626568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220216 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.413 12:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:21.413 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 --
# resv=0
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:21.675 nr_hugepages=1024
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:21.675 resv_hugepages=0
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:21.675 surplus_hugepages=0
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:21.675 anon_hugepages=0
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.675 12:33:13
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71708540 kB' 'MemAvailable: 75172300 kB' 'Buffers: 2704 kB' 'Cached: 14477240 kB' 'SwapCached: 0 kB' 'Active: 11639812 kB' 'Inactive: 3528960 kB' 'Active(anon): 11187500 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 692020 kB' 'Mapped: 201776 kB' 'Shmem: 10498672 kB' 'KReclaimable: 271768 kB' 'Slab: 909304 kB' 'SReclaimable: 271768 kB' 'SUnreclaim: 637536 kB' 'KernelStack: 22752 kB' 'PageTables: 9056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12626592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220216 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB'
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:21.675 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 --
# continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:21.676 
12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.676 
12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 41940168 kB' 'MemUsed: 6128228 kB' 'SwapCached: 0 kB' 'Active: 3805860 kB' 'Inactive: 228300 kB' 'Active(anon): 3679340 kB' 'Inactive(anon): 0 kB' 'Active(file): 126520 kB' 'Inactive(file): 228300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3921512 kB' 'Mapped: 31328 kB' 'AnonPages: 115840 kB' 'Shmem: 3566692 kB' 'KernelStack: 11400 kB' 'PageTables: 3380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123444 kB' 'Slab: 403796 kB' 'SReclaimable: 123444 kB' 'SUnreclaim: 280352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218208 kB' 'MemFree: 29768372 kB' 'MemUsed: 14449836 kB' 'SwapCached: 0 kB' 'Active: 7834328 kB' 'Inactive: 3300660 kB' 'Active(anon): 7508536 kB' 'Inactive(anon): 0 kB' 'Active(file): 325792 kB' 'Inactive(file): 3300660 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10558436 kB' 'Mapped: 170448 kB' 'AnonPages: 576556 kB' 'Shmem: 6931984 kB' 'KernelStack: 11368 kB' 'PageTables: 5728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 148324 kB' 'Slab: 505508 kB' 'SReclaimable: 148324 kB' 'SUnreclaim: 357184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.677 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:21.677 node0=512 expecting 512 00:03:21.678 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.678 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.678 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.678 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:21.678 node1=512 expecting 512 00:03:21.678 12:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:21.678 00:03:21.678 real 0m3.068s 00:03:21.678 user 0m1.228s 00:03:21.678 sys 0m1.879s 00:03:21.678 12:33:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:21.678 12:33:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:21.678 ************************************ 00:03:21.678 END TEST even_2G_alloc 00:03:21.678 ************************************ 00:03:21.678 
12:33:13 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:21.678 12:33:13 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:21.678 12:33:13 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:21.678 12:33:13 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.678 12:33:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:21.678 ************************************ 00:03:21.678 START TEST odd_alloc 00:03:21.678 ************************************ 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 
)) 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.678 12:33:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:24.972 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:24.972 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:24.972 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:24.972 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:24.972 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:24.972 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:24.972 0000:00:04.2 (8086 2021): Already using the 
vfio-pci driver 00:03:24.972 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:24.972 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:24.972 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:24.972 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:24.972 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:24.972 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:24.972 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:24.972 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:24.972 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:24.972 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:24.972 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:24.972 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:24.972 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.972 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.972 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:24.972 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:24.972 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:24.972 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.972 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.972 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.972 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:24.972 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:24.972 12:33:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71725624 kB' 'MemAvailable: 75189384 kB' 'Buffers: 2704 kB' 'Cached: 14477348 kB' 'SwapCached: 0 kB' 'Active: 11641264 kB' 'Inactive: 3528960 kB' 'Active(anon): 11188952 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 692964 kB' 'Mapped: 202308 kB' 'Shmem: 10498780 kB' 'KReclaimable: 271768 kB' 'Slab: 909416 kB' 'SReclaimable: 271768 kB' 'SUnreclaim: 637648 kB' 'KernelStack: 22800 kB' 'PageTables: 9272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12627624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220232 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 
12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.974 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:24.974 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71726820 kB' 'MemAvailable: 75190580 kB' 'Buffers: 2704 kB' 'Cached: 14477360 kB' 'SwapCached: 0 kB' 'Active: 11640880 kB' 'Inactive: 3528960 kB' 'Active(anon): 11188568 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 693016 kB' 'Mapped: 201788 kB' 'Shmem: 10498792 kB' 'KReclaimable: 271768 kB' 'Slab: 909424 kB' 'SReclaimable: 271768 kB' 'SUnreclaim: 637656 kB' 'KernelStack: 22784 kB' 'PageTables: 9188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12627644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220184 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.975 
12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.975 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:24.977 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71726820 kB' 'MemAvailable: 75190580 kB' 'Buffers: 2704 kB' 'Cached: 14477364 kB' 'SwapCached: 0 kB' 'Active: 11640556 kB' 'Inactive: 3528960 kB' 'Active(anon): 11188244 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 692728 kB' 'Mapped: 201788 kB' 'Shmem: 10498796 kB' 'KReclaimable: 271768 kB' 'Slab: 909424 kB' 'SReclaimable: 271768 kB' 'SUnreclaim: 637656 kB' 'KernelStack: 22784 kB' 'PageTables: 9188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12627664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220200 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 12:33:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.977 12:33:16
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.984 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.984 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.984 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.984 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:24.984 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:24.984 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:24.985 nr_hugepages=1025 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.985 resv_hugepages=0 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.985 surplus_hugepages=0 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.985 anon_hugepages=0 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ 
-e /sys/devices/system/node/node/meminfo ]] 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71728508 kB' 'MemAvailable: 75192268 kB' 'Buffers: 2704 kB' 'Cached: 14477408 kB' 'SwapCached: 0 kB' 'Active: 11640564 kB' 'Inactive: 3528960 kB' 'Active(anon): 11188252 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 692624 kB' 'Mapped: 201788 kB' 'Shmem: 10498840 kB' 'KReclaimable: 271768 kB' 'Slab: 909416 kB' 'SReclaimable: 271768 kB' 'SUnreclaim: 637648 kB' 'KernelStack: 22768 kB' 'PageTables: 9136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12627684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220200 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.985 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc --
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 
00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 
kB' 'MemFree: 41935900 kB' 'MemUsed: 6132496 kB' 'SwapCached: 0 kB' 'Active: 3806160 kB' 'Inactive: 228300 kB' 'Active(anon): 3679640 kB' 'Inactive(anon): 0 kB' 'Active(file): 126520 kB' 'Inactive(file): 228300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3921608 kB' 'Mapped: 31320 kB' 'AnonPages: 116056 kB' 'Shmem: 3566788 kB' 'KernelStack: 11352 kB' 'PageTables: 3224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123444 kB' 'Slab: 403956 kB' 'SReclaimable: 123444 kB' 'SUnreclaim: 280512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:24.986 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.987 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.988 
12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218208 kB' 'MemFree: 29792608 kB' 'MemUsed: 14425600 kB' 'SwapCached: 0 kB' 'Active: 7834712 kB' 'Inactive: 3300660 kB' 'Active(anon): 7508920 kB' 'Inactive(anon): 0 kB' 'Active(file): 325792 kB' 'Inactive(file): 3300660 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10558528 kB' 'Mapped: 170468 kB' 'AnonPages: 576952 kB' 'Shmem: 6932076 kB' 'KernelStack: 11432 kB' 'PageTables: 5964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 148324 kB' 'Slab: 505460 kB' 'SReclaimable: 148324 kB' 'SUnreclaim: 357136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:24.988 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.988 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.988 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.988 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:24.989 node0=512 expecting 513 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:24.989 node1=513 expecting 512 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:24.989 00:03:24.989 real 0m3.056s 00:03:24.989 user 0m1.226s 00:03:24.989 sys 0m1.870s 00:03:24.989 12:33:16 
setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.989 12:33:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:24.989 ************************************ 00:03:24.989 END TEST odd_alloc 00:03:24.989 ************************************ 00:03:24.989 12:33:16 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:24.989 12:33:16 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:24.989 12:33:16 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.989 12:33:16 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.989 12:33:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:24.989 ************************************ 00:03:24.989 START TEST custom_alloc 00:03:24.989 ************************************ 00:03:24.989 12:33:16 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.990 12:33:16 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 
00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:24.990 12:33:16 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 
00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.990 12:33:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:27.524 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:27.524 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:27.524 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:27.524 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:27.524 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:27.524 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:27.524 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:27.524 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:27.524 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:27.524 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:27.524 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:27.524 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:27.524 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:27.524 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:27.524 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:27.524 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:27.524 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.791 
12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70721312 kB' 'MemAvailable: 74185072 kB' 'Buffers: 2704 kB' 'Cached: 14477496 kB' 'SwapCached: 0 kB' 'Active: 11642772 kB' 'Inactive: 3528960 kB' 'Active(anon): 11190460 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 694784 kB' 'Mapped: 202308 kB' 'Shmem: 10498928 kB' 'KReclaimable: 271768 kB' 'Slab: 909968 kB' 'SReclaimable: 271768 kB' 'SUnreclaim: 638200 kB' 'KernelStack: 22704 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 12630704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220296 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.791 
12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.791 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 
12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.792 
12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.792 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70716372 kB' 'MemAvailable: 74180132 kB' 'Buffers: 2704 kB' 'Cached: 14477500 kB' 'SwapCached: 0 kB' 'Active: 11646580 kB' 'Inactive: 3528960 kB' 'Active(anon): 11194268 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 698676 kB' 'Mapped: 202652 kB' 'Shmem: 10498932 kB' 'KReclaimable: 
271768 kB' 'Slab: 910028 kB' 'SReclaimable: 271768 kB' 'SUnreclaim: 638260 kB' 'KernelStack: 22704 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 12634300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220252 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:27.792 [repetitive xtrace elided: setup/common.sh@31-32 read each /proc/meminfo key from MemTotal through HugePages_Rsvd, none matched HugePages_Surp, and the loop continued] 00:03:27.794 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.794 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.794 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- #
return 0 00:03:27.794 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:27.794 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:27.794 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:27.794 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:27.794 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.794 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.794 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.794 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.794 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.794 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.794 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.794 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.794 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.794 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70719700 kB' 'MemAvailable: 74183460 kB' 'Buffers: 2704 kB' 'Cached: 14477528 kB' 'SwapCached: 0 kB' 'Active: 11641468 kB' 'Inactive: 3528960 kB' 'Active(anon): 11189156 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 693596 kB' 'Mapped: 201804 kB' 'Shmem: 10498960 kB' 'KReclaimable: 271768 kB' 'Slab: 910032 kB' 'SReclaimable: 271768 kB' 
'SUnreclaim: 638264 kB' 'KernelStack: 22768 kB' 'PageTables: 9168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 12629088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220248 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:27.794 [repetitive xtrace elided: setup/common.sh@31-32 read each /proc/meminfo key from MemTotal through SUnreclaim, none matched HugePages_Rsvd, and the loop continued] 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.795 12:33:19
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.795 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
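The long run of trace entries above is `setup/common.sh`'s `get_meminfo` loop: it splits every `/proc/meminfo` line on `': '` and `continue`s past each key until it reaches the one requested (here `HugePages_Rsvd`, which returns 0). A minimal standalone sketch of that parse technique; the sample input reuses a few fields from the snapshot printed later in this trace, and the function name mirrors the script's:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo parse loop from setup/common.sh: split each
# meminfo line on ': ' and return the value of one requested key.
# Sample data reuses fields from the meminfo snapshot logged in this run.
sample_meminfo='MemTotal: 92286604 kB
HugePages_Total: 1536
HugePages_Free: 1536
HugePages_Rsvd: 0
Hugepagesize: 2048 kB'

get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Skip every key until the requested one, mirroring the trace's
        # repeated [[ $var == ... ]] / continue pattern.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done <<< "$sample_meminfo"
    return 1
}

get_meminfo HugePages_Rsvd   # prints 0
```

In the real script the loop reads `/proc/meminfo` (or `/sys/devices/system/node/node$node/meminfo` when a node is given); the here-string here just keeps the sketch self-contained.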
00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:27.796 nr_hugepages=1536 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:27.796 resv_hugepages=0 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- 
# echo surplus_hugepages=0 00:03:27.796 surplus_hugepages=0 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:27.796 anon_hugepages=0 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70720132 kB' 'MemAvailable: 74183892 kB' 'Buffers: 2704 kB' 'Cached: 14477560 kB' 'SwapCached: 0 kB' 'Active: 11640836 kB' 'Inactive: 3528960 kB' 'Active(anon): 11188524 kB' 'Inactive(anon): 0 kB' 
'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 692840 kB' 'Mapped: 201804 kB' 'Shmem: 10498992 kB' 'KReclaimable: 271768 kB' 'Slab: 910032 kB' 'SReclaimable: 271768 kB' 'SUnreclaim: 638264 kB' 'KernelStack: 22704 kB' 'PageTables: 8956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 12628224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220200 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.796 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 
12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 
12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 
12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.797 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted each compared against HugePages_Total; all continue ...]
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.798 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 41965352 kB' 'MemUsed: 6103044 kB' 'SwapCached: 0 kB' 'Active: 3804964 kB' 'Inactive: 228300 kB' 'Active(anon): 3678444 kB' 'Inactive(anon): 0 kB' 'Active(file): 126520 kB' 'Inactive(file): 228300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3921628 kB' 'Mapped: 31320 kB' 'AnonPages: 114784 kB' 'Shmem: 3566808 kB' 'KernelStack: 11320 kB' 'PageTables: 3168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123444 kB' 'Slab: 404392 kB' 'SReclaimable: 123444 kB' 'SUnreclaim: 280948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... each meminfo key from MemTotal through HugePages_Free compared against HugePages_Surp; all continue ...]
00:03:28.059 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.059 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.059 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:28.059 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:28.059 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:28.059 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.059
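The trace above repeatedly applies one pattern: pick `/proc/meminfo` or the per-node `meminfo` file, read it line by line with `IFS=': '`, skip keys until the requested one matches, then echo its value. A minimal standalone sketch of that helper follows (the function name `get_meminfo` matches the trace; the `MEMINFO_FILE` override is an assumption added here purely for testability and does not exist in setup/common.sh):

```shell
#!/usr/bin/env bash
# Sketch of a get_meminfo-style helper: print the value of one key from
# /proc/meminfo, or from /sys/devices/system/node/nodeN/meminfo when a node
# number is given and that file exists.
get_meminfo() {
    local get=$1 node=${2-}
    local mem_f=${MEMINFO_FILE:-/proc/meminfo}   # override is for testing only (assumption)
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node N "; strip that so the same
    # "Key: value" parser handles both sources (the real script strips it with
    # an extglob expansion over a mapfile array instead).
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1
}
```

On a hugepage-provisioned node, `get_meminfo HugePages_Total` would print the system-wide count (1536 in this run) and `get_meminfo HugePages_Surp 0` the surplus on node 0.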
12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:28.059 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:28.059 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:28.059 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:28.059 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.059 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.059 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:28.059 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:28.059 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.059 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.059 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.059 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.059 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218208 kB' 'MemFree: 28756216 kB' 'MemUsed: 15461992 kB' 'SwapCached: 0 kB' 'Active: 7835028 kB' 'Inactive: 3300660 kB' 'Active(anon): 7509236 kB' 'Inactive(anon): 0 kB' 'Active(file): 325792 kB' 'Inactive(file): 3300660 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10558656 kB' 'Mapped: 170484 kB' 'AnonPages: 577248 kB' 'Shmem: 6932204 kB' 'KernelStack: 11368 kB' 'PageTables: 5732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 148324 kB' 'Slab: 505640 kB' 'SReclaimable: 148324 kB' 'SUnreclaim: 357316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... each meminfo key from MemTotal through HugePages_Free compared against HugePages_Surp; all continue ...]
00:03:28.060 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.060 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.060 12:33:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:28.060 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:28.060 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:28.060 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:28.060 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:28.060 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:28.060 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 --
# for node in "${!nodes_test[@]}" 00:03:28.060 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.060 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.060 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:28.060 node1=1024 expecting 1024 00:03:28.060 12:33:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:28.060 00:03:28.060 real 0m3.098s 00:03:28.060 user 0m1.239s 00:03:28.060 sys 0m1.899s 00:03:28.060 12:33:19 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:28.060 12:33:19 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:28.060 ************************************ 00:03:28.060 END TEST custom_alloc 00:03:28.060 ************************************ 00:03:28.060 12:33:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:28.060 12:33:19 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:28.060 12:33:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:28.060 12:33:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.060 12:33:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:28.060 ************************************ 00:03:28.060 START TEST no_shrink_alloc 00:03:28.060 ************************************ 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- 
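The `node0=512 expecting 512` / `node1=1024 expecting 1024` lines above come from the final verification at hugepages.sh@130, where the observed per-node counts are compared against the expected pattern via `[[ 512,1024 == \5\1\2\,\1\0\2\4 ]]`. A minimal sketch of that comparison (hypothetical standalone code, not the actual SPDK script) joins the observed counts with commas and matches them against the expectation:

```shell
#!/usr/bin/env bash
# Sketch (hypothetical, modeled on the check traced above): collect the
# observed per-node hugepage counts in index order, join them with commas,
# and compare against the expected "512,1024" split.
nodes_test=(512 1024)   # observed pages on node0 and node1
expected="512,1024"

# Join array elements with ',' by setting IFS inside the command substitution.
observed=$(IFS=,; echo "${nodes_test[*]}")

if [[ $observed == "$expected" ]]; then
    echo "per-node hugepage split OK: $observed"
else
    echo "unexpected split: $observed (wanted $expected)" >&2
    exit 1
fi
```

The backslash-escaped right-hand side in the real trace (`\5\1\2\,\1\0\2\4`) is just bash quoting each character so the pattern is matched literally rather than as a glob.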
setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:28.060 12:33:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.060 12:33:19 
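The `get_test_nr_hugepages_per_node 0` call traced above assigns the full request (1024 pages) to each node the caller listed, since `user_nodes=('0')` is non-empty. A minimal sketch of that distribution logic (hypothetical code, modeled on the hugepages.sh lines in the trace, not the script itself) also shows the fallback of splitting the request evenly when no nodes are given:

```shell
#!/usr/bin/env bash
# Sketch (hypothetical, modeled on get_test_nr_hugepages_per_node as seen
# in the xtrace): explicit node IDs each receive the full page count;
# with no explicit nodes, pages are divided evenly across all nodes.
declare -A nodes_test

get_test_nr_hugepages_per_node_sketch() {
    local _nr_hugepages=$1 _no_nodes=$2
    shift 2
    local user_nodes=("$@") _node
    nodes_test=()
    if (( ${#user_nodes[@]} > 0 )); then
        # Matches the traced branch: (( 1 > 0 )) -> nodes_test[_no_nodes]=1024
        for _node in "${user_nodes[@]}"; do
            nodes_test[$_node]=$_nr_hugepages
        done
    else
        for (( _node = 0; _node < _no_nodes; _node++ )); do
            nodes_test[$_node]=$(( _nr_hugepages / _no_nodes ))
        done
    fi
}

# As in the trace: 1024 pages requested, 2-node system, user picked node 0.
get_test_nr_hugepages_per_node_sketch 1024 2 0
echo "node0=${nodes_test[0]}"
```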
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:30.598 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:30.598 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:30.598 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:30.598 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:30.598 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:30.598 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:30.860 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:30.860 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:30.860 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:30.860 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:30.860 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:30.860 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:30.860 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:30.860 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:30.860 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:30.860 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:30.860 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:30.860 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:30.860 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:30.860 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.860 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.860 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:30.860 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:30.860 12:33:22 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:30.860 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.860 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.860 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.860 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.860 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.860 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.860 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.860 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.860 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.860 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.860 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.860 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71773028 kB' 'MemAvailable: 75236772 kB' 'Buffers: 2704 kB' 'Cached: 14477656 kB' 'SwapCached: 0 kB' 'Active: 11643028 kB' 'Inactive: 3528960 kB' 'Active(anon): 11190716 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 
kB' 'AnonPages: 695196 kB' 'Mapped: 201920 kB' 'Shmem: 10499088 kB' 'KReclaimable: 271736 kB' 'Slab: 909700 kB' 'SReclaimable: 271736 kB' 'SUnreclaim: 637964 kB' 'KernelStack: 22720 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12629036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220232 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:30.861 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.861 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.861 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.861 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.861 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.861 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.861 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:30.861 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [... identical xtrace cycles (IFS=': ' / read -r var val _ / [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue) elided for the remaining /proc/meminfo keys, Buffers through HardwareCorrupted ...] 00:03:30.862 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.862 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.862 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.862 12:33:22
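The long per-key cycles above are `get_meminfo` scanning every `/proc/meminfo` line until it hits the requested key (here `AnonHugePages`, yielding 0). A minimal sketch of that parsing pattern (a hypothetical standalone helper modeled on setup/common.sh as traced, reading from a parameterized file so it stays self-contained):

```shell
#!/usr/bin/env bash
# Sketch (hypothetical helper, modeled on the get_meminfo pattern in the
# trace): scan "Key: value [kB]" lines and print the value for one key.
# The real script reads /proc/meminfo (or a per-node meminfo file).
get_meminfo_sketch() {
    local get=$1 mem_f=$2 var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching key is skipped -- which is why the xtrace
        # above shows one [[ ... ]] / continue pair per meminfo line.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Usage with a fabricated two-line meminfo snippet:
printf 'HugePages_Total:    1024\nHugePages_Surp:        0\n' > /tmp/meminfo.sketch
get_meminfo_sketch HugePages_Surp /tmp/meminfo.sketch
```

Setting `IFS=': '` makes `read` split on both the colon and runs of whitespace, so `var` gets the key, `val` the number, and `_` swallows the trailing `kB` unit when present.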
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:30.862 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:30.862 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.862 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:30.862 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:30.862 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.862 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.862 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.862 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.862 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.862 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.862 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.862 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.862 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71774368 kB' 'MemAvailable: 75238112 kB' 'Buffers: 2704 kB' 'Cached: 14477660 kB' 'SwapCached: 0 kB' 'Active: 11642636 kB' 'Inactive: 3528960 kB' 'Active(anon): 11190324 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 694552 kB' 'Mapped: 201820 kB' 'Shmem: 10499092 kB' 'KReclaimable: 271736 kB' 'Slab: 909628 kB' 'SReclaimable: 271736 kB' 'SUnreclaim: 637892 kB' 'KernelStack: 22736 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12629052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220232 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB'
[common.sh@31-32 scan: every key in the snapshot above, from MemTotal through DirectMap1G, is compared against HugePages_Surp and skipped with `continue`; the identical per-key xtrace iterations are elided here]
00:03:30.864 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.864 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.864 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:30.864 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:30.864 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[common.sh@17-31: same local/mapfile setup as above, this time with get=HugePages_Rsvd]
00:03:30.864 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71774780 kB' 'MemAvailable: 75238524 kB' 'Buffers: 2704 kB' 'Cached: 14477660 kB' 'SwapCached: 0 kB' 'Active: 11642332 kB' 'Inactive: 3528960 kB' 'Active(anon): 11190020 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 694248 kB' 'Mapped: 201820 kB' 'Shmem: 10499092 kB' 'KReclaimable: 271736 kB' 'Slab: 909628 kB' 'SReclaimable: 271736 kB' 'SUnreclaim: 637892 kB' 'KernelStack: 22736 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12629076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220232 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB'
[common.sh@31-32 scan against HugePages_Rsvd: per-key `continue` iterations elided; trace continues]
12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.865 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.865 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.865 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.865 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.865 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.865 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.865 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.865 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.865 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.865 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.865 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.865 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.865 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.865 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.865 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.865 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.865 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.865 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:30.866 nr_hugepages=1024 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.866 resv_hugepages=0 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.866 surplus_hugepages=0 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.866 anon_hugepages=0 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.866 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 
00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71775040 kB' 'MemAvailable: 75238784 kB' 'Buffers: 2704 kB' 'Cached: 14477660 kB' 'SwapCached: 0 kB' 'Active: 11643912 kB' 'Inactive: 3528960 kB' 'Active(anon): 11191600 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 695808 kB' 'Mapped: 202324 kB' 'Shmem: 10499092 kB' 'KReclaimable: 271736 kB' 'Slab: 909628 kB' 'SReclaimable: 271736 kB' 'SUnreclaim: 637892 kB' 'KernelStack: 22704 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12631512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220200 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.128 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
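Around this point the trace evaluates checks of the form `(( 1024 == nr_hugepages + surp + resv ))` (`hugepages.sh@107`/`@110`): the `HugePages_Total` value read back from meminfo must equal the requested page count plus surplus and reserved pages. A minimal sketch of that consistency check, with the values this run's log reports:

```shell
#!/usr/bin/env bash
# Hugepage accounting check as performed in the trace: the kernel's
# reported total must match requested + surplus + reserved.
# Values below are taken from this run's log output.
nr_hugepages=1024
surp=0
resv=0
total=1024   # HugePages_Total as read back from meminfo in this run

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent"
else
    echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
    exit 1
fi
```

When the check fails, the test script treats the allocation as unstable and retries or aborts rather than proceeding with a partially satisfied reservation.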
00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.129 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.130 12:33:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # 
no_nodes=2 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 40922272 kB' 'MemUsed: 7146124 kB' 'SwapCached: 0 kB' 'Active: 3812216 kB' 'Inactive: 228300 kB' 'Active(anon): 3685696 kB' 'Inactive(anon): 0 kB' 'Active(file): 126520 kB' 'Inactive(file): 228300 kB' 'Unevictable: 
3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3921660 kB' 'Mapped: 31324 kB' 'AnonPages: 122088 kB' 'Shmem: 3566840 kB' 'KernelStack: 11304 kB' 'PageTables: 3196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123412 kB' 'Slab: 404232 kB' 'SReclaimable: 123412 kB' 'SUnreclaim: 280820 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.130 12:33:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.130 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 
12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 
12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # echo 0 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:31.131 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:31.132 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:31.132 node0=1024 expecting 1024 00:03:31.132 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:31.132 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:31.132 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:31.132 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:31.132 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.132 12:33:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:34.428 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:34.428 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:34.428 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:34.428 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:34.428 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:34.428 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:34.428 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:34.428 0000:00:04.1 (8086 2021): Already using the vfio-pci 
driver 00:03:34.428 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:34.428 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:34.428 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:34.428 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:34.428 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:34.428 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:34.428 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:34.428 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:34.428 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:34.428 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 
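The INFO line above ("Requested 512 hugepages but 1024 already allocated on node0") reflects the `CLEAR_HUGE=no` / `NRHUGE=512` settings traced earlier: when clearing is disabled, an existing per-node allocation that already exceeds the request is kept rather than shrunk. A minimal, hypothetical sketch of that decision — the helper name and directory layout are illustrative, not SPDK's actual code:

```shell
#!/usr/bin/env bash
# Hypothetical sketch (not SPDK's setup.sh): with CLEAR_HUGE=no, keep a
# per-node 2 MiB hugepage allocation that already exceeds NRHUGE and
# report it, instead of shrinking the pool. "base" stands in for
# /sys/devices/system/node so the logic can be exercised on a mock tree.
check_node_hugepages() {
    local base=$1 nrhuge=$2 node_dir nr_file allocated
    for node_dir in "$base"/node[0-9]*; do
        nr_file=$node_dir/hugepages/hugepages-2048kB/nr_hugepages
        [[ -r $nr_file ]] || continue
        allocated=$(<"$nr_file")
        if (( allocated > nrhuge )); then
            # Matches the shape of the INFO message seen in the log.
            echo "INFO: Requested $nrhuge hugepages but $allocated already allocated on ${node_dir##*/}"
        fi
    done
}
```

Pointing `base` at the real `/sys/devices/system/node` would reproduce the check against live sysfs state.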
00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71763312 kB' 'MemAvailable: 75227056 kB' 'Buffers: 2704 kB' 'Cached: 14477792 kB' 'SwapCached: 0 kB' 'Active: 11642580 kB' 'Inactive: 3528960 kB' 'Active(anon): 11190268 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 694332 kB' 'Mapped: 201836 kB' 'Shmem: 10499224 kB' 'KReclaimable: 271736 kB' 'Slab: 909252 kB' 'SReclaimable: 271736 kB' 'SUnreclaim: 637516 kB' 'KernelStack: 22688 kB' 'PageTables: 8860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12629200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220168 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.428 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.429 12:33:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.429 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71763388 kB' 'MemAvailable: 75227132 kB' 'Buffers: 2704 kB' 'Cached: 14477792 kB' 'SwapCached: 0 kB' 'Active: 11643004 kB' 'Inactive: 3528960 kB' 'Active(anon): 11190692 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 694776 kB' 'Mapped: 201824 kB' 'Shmem: 10499224 kB' 'KReclaimable: 271736 kB' 'Slab: 909308 kB' 'SReclaimable: 271736 kB' 'SUnreclaim: 637572 kB' 'KernelStack: 22720 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12629344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220168 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.430 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
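The xtrace above is `setup/common.sh`'s `get_meminfo` scanning `/proc/meminfo` key by key (`IFS=': '` plus `read -r var val _`, `continue` on every non-matching key, `echo`/`return` on the match). A minimal sketch of that loop, with names inferred from the trace rather than copied from the SPDK source, and simplified to read the file directly instead of slurping it with `mapfile` first:

```shell
# Hypothetical reconstruction of the loop traced above; the real
# setup/common.sh slurps the file into an array with mapfile and
# iterates that, while this sketch streams the file for brevity.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip keys until the requested one
        echo "$val"                        # value column only; unit dropped
        return 0
    done < "$mem_f"
    return 1
}
```

With this shape, `get_meminfo HugePages_Surp` prints the bare number (here `0`), which is what the surrounding `surp=0` assignment in the trace captures.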
00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71763136 kB' 'MemAvailable: 75226880 kB' 'Buffers: 2704 kB' 'Cached: 14477812 kB' 'SwapCached: 0 kB' 'Active: 11643032 kB' 'Inactive: 3528960 kB' 'Active(anon): 11190720 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 694776 kB' 'Mapped: 201824 kB' 'Shmem: 10499244 kB' 'KReclaimable: 271736 kB' 'Slab: 909308 kB' 
'SReclaimable: 271736 kB' 'SUnreclaim: 637572 kB' 'KernelStack: 22720 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12629372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220168 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.431 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.432 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.433 12:33:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:34.433 nr_hugepages=1024 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:34.433 resv_hugepages=0 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:34.433 surplus_hugepages=0 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:34.433 anon_hugepages=0 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 
-- # mapfile -t mem 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71762632 kB' 'MemAvailable: 75226376 kB' 'Buffers: 2704 kB' 'Cached: 14477852 kB' 'SwapCached: 0 kB' 'Active: 11642708 kB' 'Inactive: 3528960 kB' 'Active(anon): 11190396 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3528960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 694388 kB' 'Mapped: 201824 kB' 'Shmem: 10499284 kB' 'KReclaimable: 271736 kB' 'Slab: 909308 kB' 'SReclaimable: 271736 kB' 'SUnreclaim: 637572 kB' 'KernelStack: 22704 kB' 'PageTables: 8908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12629396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220168 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3687380 kB' 'DirectMap2M: 30595072 kB' 'DirectMap1G: 67108864 kB' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.433 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.434 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.434 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.434 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.434 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.434 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.434 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.434 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.434 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.434 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.434 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.434 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.434 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.434 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.434 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.434 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.434 12:33:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.434 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.434 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.434 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.434 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.434 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node=0 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 40908292 kB' 'MemUsed: 7160104 kB' 'SwapCached: 0 kB' 'Active: 3805988 kB' 'Inactive: 228300 kB' 'Active(anon): 3679468 kB' 'Inactive(anon): 0 kB' 'Active(file): 126520 kB' 'Inactive(file): 228300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3921692 kB' 'Mapped: 31320 kB' 'AnonPages: 115784 kB' 'Shmem: 3566872 kB' 'KernelStack: 11336 kB' 'PageTables: 3248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123412 kB' 'Slab: 404096 kB' 'SReclaimable: 123412 kB' 'SUnreclaim: 280684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.435 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.437 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.437 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.437 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.437 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.437 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.437 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.437 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.437 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo
'node0=1024 expecting 1024' 00:03:34.437 node0=1024 expecting 1024 00:03:34.437 12:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:34.437 00:03:34.437 real 0m6.144s 00:03:34.437 user 0m2.441s 00:03:34.437 sys 0m3.767s 00:03:34.437 12:33:25 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.437 12:33:25 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:34.437 ************************************ 00:03:34.437 END TEST no_shrink_alloc 00:03:34.437 ************************************ 00:03:34.437 12:33:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:34.437 12:33:26 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:34.437 12:33:26 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:34.437 12:33:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:34.437 12:33:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.437 12:33:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.437 12:33:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.437 12:33:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.437 12:33:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:34.437 12:33:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.437 12:33:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.437 12:33:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.437 12:33:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.437 12:33:26 setup.sh.hugepages -- 
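The long `IFS=': ' read -r var val _` / `continue` run traced above is setup/common.sh scanning meminfo-style output for one field (here `HugePages_Surp`), skipping every non-matching key. A minimal sketch of that parsing pattern, fed sample lines instead of the real /proc/meminfo (the function name `get_hugepages_field` is illustrative, not the script's own):

```shell
# Illustrative re-creation of the common.sh read loop: split each
# "Key:   value" line on ':' plus whitespace and emit the value once
# the key matches the requested field.
get_hugepages_field() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Sample input standing in for /proc/meminfo.
surp=$(get_hugepages_field HugePages_Surp <<'EOF'
HugePages_Total:    1024
HugePages_Free:     1024
HugePages_Surp:        0
EOF
)
echo "HugePages_Surp=$surp"
```

With `IFS=': '`, the colon and any adjacent spaces collapse into a single field separator, which is why the trailing `_` variable soaks up nothing for these lines.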
setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:34.437 12:33:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:34.437 00:03:34.437 real 0m23.281s 00:03:34.437 user 0m9.051s 00:03:34.437 sys 0m13.738s 00:03:34.437 12:33:26 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.437 12:33:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:34.437 ************************************ 00:03:34.437 END TEST hugepages 00:03:34.437 ************************************ 00:03:34.437 12:33:26 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:34.437 12:33:26 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:34.437 12:33:26 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.437 12:33:26 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.437 12:33:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:34.437 ************************************ 00:03:34.437 START TEST driver 00:03:34.437 ************************************ 00:03:34.437 12:33:26 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:34.437 * Looking for test storage... 
00:03:34.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:34.437 12:33:26 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:34.437 12:33:26 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:34.437 12:33:26 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:38.684 12:33:30 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:38.684 12:33:30 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:38.684 12:33:30 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.684 12:33:30 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:38.684 ************************************ 00:03:38.684 START TEST guess_driver 00:03:38.684 ************************************ 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@29 
-- # (( 175 > 0 )) 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:38.684 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:38.684 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:38.684 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:38.684 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:38.684 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:38.684 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:38.684 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:38.684 Looking for driver=vfio-pci 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup 
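The `(( 175 > 0 ))` check above is the driver pick: with IOMMU groups present (175 of them on this node) and `vfio_pci`'s module dependencies resolvable via `modprobe --show-depends`, the script settles on vfio-pci. A condensed sketch of that decision, with the `modprobe` dependency probe left out (the helper name `pick_driver_sketch` and the fallback driver are illustrative assumptions, not the script's exact flow):

```shell
# Illustrative condensation of the vfio selection logic: prefer vfio-pci
# when IOMMU groups exist or unsafe no-IOMMU mode is enabled, otherwise
# fall back to a non-IOMMU driver.
pick_driver_sketch() {
    local groups=$1 unsafe_vfio=$2
    if (( groups > 0 )) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
    else
        echo uio_pci_generic
    fi
}

# The log above counted 175 groups with unsafe_vfio=N.
pick_driver_sketch 175 N
```

The real script additionally requires that `modprobe --show-depends vfio_pci` lists loadable `.ko` objects, as seen in the trace, before accepting vfio-pci.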
output config 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.684 12:33:30 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- 
setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.971 12:33:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.538 12:33:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.539 12:33:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.539 12:33:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.539 12:33:34 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:42.539 12:33:34 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:42.539 12:33:34 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.539 12:33:34 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
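The repeating `read -r _ _ _ _ marker setup_driver` / `[[ -> == \-\> ]]` / `[[ vfio-pci == vfio-pci ]]` triplet above is one loop iteration per device line: only lines whose fifth field is the `->` marker are checked, and `fail` stays 0 as long as every bound driver matches. A sketch of that loop under an assumed line format (the sample device lines below are illustrative, not copied from this run):

```shell
# Illustrative sketch of the guess_driver verification loop: count the
# "->"-marked lines whose bound driver differs from the expected one.
check_drivers() {
    local expected=$1 fail=0 _ marker setup_driver
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue
        if [[ $setup_driver != "$expected" ]]; then
            fail=$(( fail + 1 ))
        fi
    done
    echo "$fail"
}

fails=$(check_drivers vfio-pci <<'EOF'
0000:86:00.0 (8086 0a54): nvme -> vfio-pci
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
EOF
)
echo "fails=$fails"
```

The `(( fail == 0 ))` test at driver.sh@64 in the trace is then the pass/fail verdict for the whole device list.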
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.728 00:03:46.728 real 0m8.118s 00:03:46.728 user 0m2.321s 00:03:46.728 sys 0m4.170s 00:03:46.728 12:33:38 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.728 12:33:38 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:46.728 ************************************ 00:03:46.728 END TEST guess_driver 00:03:46.728 ************************************ 00:03:46.728 12:33:38 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:46.728 00:03:46.728 real 0m12.476s 00:03:46.728 user 0m3.519s 00:03:46.728 sys 0m6.505s 00:03:46.728 12:33:38 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.728 12:33:38 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:46.728 ************************************ 00:03:46.728 END TEST driver 00:03:46.728 ************************************ 00:03:46.728 12:33:38 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:46.728 12:33:38 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:46.728 12:33:38 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.728 12:33:38 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.728 12:33:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:46.728 ************************************ 00:03:46.728 START TEST devices 00:03:46.728 ************************************ 00:03:46.728 12:33:38 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:46.994 * Looking for test storage... 
00:03:46.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:46.994 12:33:38 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:46.995 12:33:38 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:46.995 12:33:38 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:46.995 12:33:38 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.284 12:33:41 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:50.284 12:33:41 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:50.284 12:33:41 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:50.284 12:33:41 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:50.284 12:33:41 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:50.284 12:33:41 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:50.284 12:33:41 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:50.284 12:33:41 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:50.284 12:33:41 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:50.284 12:33:41 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:50.284 12:33:41 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:50.284 12:33:41 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:50.284 12:33:41 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:50.284 12:33:41 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:50.284 12:33:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:50.284 12:33:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:03:50.284 12:33:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:50.284 12:33:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:86:00.0 00:03:50.284 12:33:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]] 00:03:50.284 12:33:41 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:50.284 12:33:41 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:50.284 12:33:41 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:50.284 No valid GPT data, bailing 00:03:50.284 12:33:41 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:50.284 12:33:42 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:50.284 12:33:42 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:50.284 12:33:42 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:50.284 12:33:42 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:50.284 12:33:42 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:50.284 12:33:42 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:50.284 12:33:42 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:50.284 12:33:42 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:50.284 12:33:42 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:86:00.0 00:03:50.284 12:33:42 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:50.284 12:33:42 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:50.284 12:33:42 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:50.284 12:33:42 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.284 12:33:42 setup.sh.devices -- 
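The size gate traced above (`min_disk_size=3221225472`, then `(( 1000204886016 >= min_disk_size ))`) admits a block device to the tests only if it holds at least 3 GiB; the ~1 TB NVMe on this node passes. A minimal sketch of that check (the helper name `disk_qualifies` is illustrative):

```shell
# Illustrative sketch of the devices.sh size gate: a candidate disk must
# be at least 3 GiB (3 * 1024^3 bytes) to be used for the mount tests.
min_disk_size=3221225472
disk_qualifies() {
    local bytes=$1
    (( bytes >= min_disk_size ))
}

disk_qualifies 1000204886016 && echo "1000204886016 bytes: qualifies"
```

In the real run the byte count comes from the device's sysfs size, echoed by `sec_size_to_bytes` as `1000204886016` in the trace above.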
common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.284 12:33:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:50.284 ************************************ 00:03:50.284 START TEST nvme_mount 00:03:50.284 ************************************ 00:03:50.284 12:33:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:50.284 12:33:42 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:50.284 12:33:42 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:50.284 12:33:42 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.284 12:33:42 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:50.284 12:33:42 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:50.284 12:33:42 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:50.284 12:33:42 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:50.284 12:33:42 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:50.284 12:33:42 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:50.284 12:33:42 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:50.284 12:33:42 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:50.284 12:33:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:50.284 12:33:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:50.284 12:33:42 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:50.284 12:33:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:50.284 12:33:42 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:50.284 12:33:42 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:50.284 12:33:42 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:50.284 12:33:42 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:51.220 Creating new GPT entries in memory. 00:03:51.220 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:51.220 other utilities. 00:03:51.220 12:33:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:51.220 12:33:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:51.220 12:33:43 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:51.220 12:33:43 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:51.220 12:33:43 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:52.157 Creating new GPT entries in memory. 00:03:52.157 The operation has completed successfully. 
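The sector bounds in the `sgdisk --new=1:2048:2099199` call above follow from the arithmetic traced in common.sh: the 1 GiB partition size is divided by 512 to get sectors, the first partition starts at sector 2048, and the end is start plus size minus one. Reproducing that derivation:

```shell
# Re-derivation of the sgdisk bounds from common.sh@51,58-59:
# 1 GiB = 1073741824 bytes = 2097152 sectors of 512 bytes.
size=$(( 1073741824 / 512 ))
part_start=2048
part_end=$(( part_start + size - 1 ))

# Matches the traced call: sgdisk /dev/nvme0n1 --new=1:2048:2099199
echo "--new=1:${part_start}:${part_end}"
```

A second partition, had `part_no` been greater than 1, would start at `part_end + 1` per the `part_start == 0 ? 2048 : part_end + 1` expression in the trace.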
00:03:52.157 12:33:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:52.157 12:33:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.157 12:33:44 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3714003 00:03:52.416 12:33:44 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.416 12:33:44 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:52.416 12:33:44 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.416 12:33:44 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:52.416 12:33:44 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:52.416 12:33:44 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.416 12:33:44 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:86:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:52.416 12:33:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:03:52.416 12:33:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:52.416 12:33:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.416 12:33:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:03:52.417 12:33:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:52.417 12:33:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:52.417 12:33:44 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:52.417 12:33:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:52.417 12:33:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.417 12:33:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:03:52.417 12:33:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:52.417 12:33:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.417 12:33:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.949 12:33:46 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.949 12:33:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.208 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:55.209 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:55.209 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.209 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:55.209 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:55.209 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:55.209 
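The `read -r pci _ _ status` loop traced above walks `setup.sh config` output one PCI address per line: entries that are not the allowed device (`PCI_ALLOWED=0000:86:00.0`) are skipped, and `found` flips to 1 when the allowed device's status names the active mount. A sketch under an assumed line format (the sample status lines below are illustrative, patterned on the trace, not copied from it):

```shell
# Illustrative sketch of the devices.sh verify loop: inspect only the
# allowed PCI address and mark it found when its status reports the
# expected nvme mount.
pci_allowed=0000:86:00.0
found=0
while read -r pci _ _ status; do
    [[ $pci == "$pci_allowed" ]] || continue
    [[ $status == *nvme0n1* ]] && found=1
done <<'EOF'
0000:86:00.0 (8086 0a54): Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev
0000:00:04.7 (8086 2021): idle
EOF
echo "found=$found"
```

The `(( found == 1 ))` check at devices.sh@66 in the trace then confirms the test disk was mounted where expected.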
12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.209 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.209 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:55.209 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:55.209 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:55.209 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:55.209 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:55.468 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:55.468 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:55.468 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:55.468 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:55.468 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:55.468 12:33:47 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:55.468 12:33:47 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.468 12:33:47 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:55.468 12:33:47 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:55.468 12:33:47 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.727 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:86:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:55.727 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:03:55.727 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:55.727 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.727 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:55.727 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:55.727 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:55.727 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:55.727 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:55.727 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.727 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:03:55.727 12:33:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:55.727 12:33:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.727 12:33:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == 
\0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.262 12:33:50 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.521 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:58.521 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:58.521 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.521 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:58.521 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.521 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.521 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:86:00.0 data@nvme0n1 '' '' 00:03:58.521 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:03:58.521 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:58.521 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:58.521 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:58.521 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.521 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:58.521 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.521 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.521 12:33:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:03:58.521 12:33:50 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.521 12:33:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.521 12:33:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:01.056 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:01.056 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:01.056 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:01.056 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.056 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:01.056 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.056 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:01.056 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.057 12:33:52 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:01.057 12:33:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.316 12:33:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:01.316 12:33:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:01.316 12:33:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:01.316 12:33:53 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:01.316 12:33:53 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.316 12:33:53 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:01.316 12:33:53 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:01.316 12:33:53 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:01.316 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:01.316 00:04:01.316 real 0m11.080s 00:04:01.316 user 0m3.237s 00:04:01.316 sys 0m5.668s 00:04:01.316 12:33:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.316 12:33:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:01.316 ************************************ 00:04:01.316 END TEST nvme_mount 00:04:01.316 ************************************ 00:04:01.316 12:33:53 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:01.316 12:33:53 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 
00:04:01.316 12:33:53 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.316 12:33:53 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.316 12:33:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:01.316 ************************************ 00:04:01.316 START TEST dm_mount 00:04:01.316 ************************************ 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # 
parts+=("${disk}p$part") 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:01.316 12:33:53 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:02.691 Creating new GPT entries in memory. 00:04:02.691 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:02.691 other utilities. 00:04:02.691 12:33:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:02.691 12:33:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:02.691 12:33:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:02.691 12:33:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:02.691 12:33:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:03.625 Creating new GPT entries in memory. 00:04:03.625 The operation has completed successfully. 00:04:03.625 12:33:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:03.625 12:33:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:03.625 12:33:55 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:03.625 12:33:55 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:03.625 12:33:55 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:04.562 The operation has completed successfully. 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3718199 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- 
setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:86:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.562 12:33:56 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:86:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:07.852 12:33:59 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:07.852 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:07.853 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:07.853 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:07.853 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:07.853 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:07.853 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.853 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:07.853 12:33:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:07.853 12:33:59 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.853 12:33:59 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:10.386 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:10.386 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:10.386 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:10.386 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.386 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:10.386 12:34:01 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.386 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:10.386 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.386 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:10.386 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.386 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:10.386 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.386 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:10.386 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:10.387 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:10.387 00:04:10.387 real 0m8.790s 00:04:10.387 user 0m2.021s 00:04:10.387 sys 0m3.769s 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.387 12:34:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:10.387 ************************************ 00:04:10.387 END TEST dm_mount 00:04:10.387 ************************************ 00:04:10.387 12:34:02 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:10.387 12:34:02 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:10.387 12:34:02 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:10.387 12:34:02 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.387 12:34:02 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.387 12:34:02 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:10.387 12:34:02 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:10.387 12:34:02 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:10.387 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:10.387 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 
50 41 52 54 00:04:10.387 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:10.387 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:10.387 12:34:02 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:10.387 12:34:02 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:10.387 12:34:02 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:10.387 12:34:02 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.387 12:34:02 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:10.387 12:34:02 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:10.387 12:34:02 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:10.387 00:04:10.387 real 0m23.662s 00:04:10.387 user 0m6.573s 00:04:10.387 sys 0m11.783s 00:04:10.387 12:34:02 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.387 12:34:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:10.387 ************************************ 00:04:10.387 END TEST devices 00:04:10.387 ************************************ 00:04:10.646 12:34:02 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:10.646 00:04:10.646 real 1m20.396s 00:04:10.646 user 0m26.036s 00:04:10.646 sys 0m44.668s 00:04:10.646 12:34:02 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.646 12:34:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:10.646 ************************************ 00:04:10.646 END TEST setup.sh 00:04:10.646 ************************************ 00:04:10.646 12:34:02 -- common/autotest_common.sh@1142 -- # return 0 00:04:10.646 12:34:02 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:13.180 Hugepages 00:04:13.180 node hugesize free / total 
00:04:13.180 node0 1048576kB 0 / 0 00:04:13.180 node0 2048kB 2048 / 2048 00:04:13.180 node1 1048576kB 0 / 0 00:04:13.180 node1 2048kB 0 / 0 00:04:13.180 00:04:13.180 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:13.180 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:13.180 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:13.180 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:13.180 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:13.180 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:13.180 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:13.180 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:13.180 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:13.180 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:13.180 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:13.180 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:13.180 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:13.180 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:13.180 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:13.180 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:13.180 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:13.439 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:13.439 12:34:05 -- spdk/autotest.sh@130 -- # uname -s 00:04:13.439 12:34:05 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:13.439 12:34:05 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:13.439 12:34:05 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:15.975 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:15.975 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:15.975 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:15.975 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:15.975 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:15.975 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:15.975 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:15.975 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:16.234 
0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:16.234 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:16.234 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:16.234 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:16.234 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:16.234 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:16.234 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:16.234 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:17.171 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:04:17.171 12:34:08 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:18.106 12:34:09 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:18.106 12:34:09 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:18.106 12:34:09 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:18.106 12:34:09 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:18.106 12:34:09 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:18.106 12:34:09 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:18.106 12:34:09 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:18.106 12:34:09 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:18.106 12:34:09 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:18.365 12:34:10 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:18.365 12:34:10 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:86:00.0 00:04:18.365 12:34:10 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.898 Waiting for block devices as requested 00:04:20.898 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:04:21.157 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:21.157 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:21.424 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:21.424 0000:00:04.4 (8086 
2021): vfio-pci -> ioatdma 00:04:21.424 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:21.424 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:21.685 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:21.685 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:21.685 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:21.979 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:21.979 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:21.979 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:21.979 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:22.277 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:22.277 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:22.277 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:22.535 12:34:14 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:22.535 12:34:14 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:86:00.0 00:04:22.535 12:34:14 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:22.536 12:34:14 -- common/autotest_common.sh@1502 -- # grep 0000:86:00.0/nvme/nvme 00:04:22.536 12:34:14 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:04:22.536 12:34:14 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 ]] 00:04:22.536 12:34:14 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:04:22.536 12:34:14 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:22.536 12:34:14 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:22.536 12:34:14 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:22.536 12:34:14 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:22.536 12:34:14 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:22.536 12:34:14 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:22.536 12:34:14 -- 
common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:22.536 12:34:14 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:22.536 12:34:14 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:22.536 12:34:14 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:22.536 12:34:14 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:22.536 12:34:14 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:22.536 12:34:14 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:22.536 12:34:14 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:22.536 12:34:14 -- common/autotest_common.sh@1557 -- # continue 00:04:22.536 12:34:14 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:22.536 12:34:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:22.536 12:34:14 -- common/autotest_common.sh@10 -- # set +x 00:04:22.536 12:34:14 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:22.536 12:34:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:22.536 12:34:14 -- common/autotest_common.sh@10 -- # set +x 00:04:22.536 12:34:14 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:25.069 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:25.069 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:25.069 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:25.069 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:25.069 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:25.069 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:25.069 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:25.069 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:25.328 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:25.328 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:25.328 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:25.328 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:25.328 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:25.328 0000:80:04.2 (8086 
2021): ioatdma -> vfio-pci 00:04:25.328 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:25.328 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:26.275 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:04:26.275 12:34:18 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:26.275 12:34:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:26.275 12:34:18 -- common/autotest_common.sh@10 -- # set +x 00:04:26.275 12:34:18 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:26.275 12:34:18 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:26.275 12:34:18 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:26.275 12:34:18 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:26.275 12:34:18 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:26.275 12:34:18 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:26.275 12:34:18 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:26.275 12:34:18 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:26.275 12:34:18 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:26.275 12:34:18 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:26.275 12:34:18 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:26.275 12:34:18 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:26.275 12:34:18 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:86:00.0 00:04:26.275 12:34:18 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:26.275 12:34:18 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:86:00.0/device 00:04:26.275 12:34:18 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:26.534 12:34:18 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:26.534 12:34:18 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:26.534 12:34:18 -- 
common/autotest_common.sh@1586 -- # printf '%s\n' 0000:86:00.0 00:04:26.534 12:34:18 -- common/autotest_common.sh@1592 -- # [[ -z 0000:86:00.0 ]] 00:04:26.534 12:34:18 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3727515 00:04:26.534 12:34:18 -- common/autotest_common.sh@1598 -- # waitforlisten 3727515 00:04:26.534 12:34:18 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.534 12:34:18 -- common/autotest_common.sh@829 -- # '[' -z 3727515 ']' 00:04:26.534 12:34:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.534 12:34:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:26.534 12:34:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.534 12:34:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:26.534 12:34:18 -- common/autotest_common.sh@10 -- # set +x 00:04:26.534 [2024-07-15 12:34:18.317483] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:04:26.534 [2024-07-15 12:34:18.317596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727515 ] 00:04:26.534 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.534 [2024-07-15 12:34:18.437401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.793 [2024-07-15 12:34:18.539125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.051 12:34:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:27.051 12:34:18 -- common/autotest_common.sh@862 -- # return 0 00:04:27.051 12:34:18 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:27.051 12:34:18 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:27.051 12:34:18 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:86:00.0 00:04:30.339 nvme0n1 00:04:30.339 12:34:21 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:30.339 [2024-07-15 12:34:22.138777] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:30.339 request: 00:04:30.339 { 00:04:30.339 "nvme_ctrlr_name": "nvme0", 00:04:30.339 "password": "test", 00:04:30.339 "method": "bdev_nvme_opal_revert", 00:04:30.339 "req_id": 1 00:04:30.339 } 00:04:30.339 Got JSON-RPC error response 00:04:30.339 response: 00:04:30.339 { 00:04:30.339 "code": -32602, 00:04:30.339 "message": "Invalid parameters" 00:04:30.339 } 00:04:30.339 12:34:22 -- common/autotest_common.sh@1604 -- # true 00:04:30.339 12:34:22 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:30.339 12:34:22 -- common/autotest_common.sh@1608 -- # killprocess 3727515 00:04:30.339 12:34:22 -- 
common/autotest_common.sh@948 -- # '[' -z 3727515 ']' 00:04:30.339 12:34:22 -- common/autotest_common.sh@952 -- # kill -0 3727515 00:04:30.339 12:34:22 -- common/autotest_common.sh@953 -- # uname 00:04:30.339 12:34:22 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:30.339 12:34:22 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3727515 00:04:30.339 12:34:22 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:30.339 12:34:22 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:30.339 12:34:22 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3727515' 00:04:30.339 killing process with pid 3727515 00:04:30.339 12:34:22 -- common/autotest_common.sh@967 -- # kill 3727515 00:04:30.339 12:34:22 -- common/autotest_common.sh@972 -- # wait 3727515 00:04:32.240 12:34:23 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:32.240 12:34:23 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:32.240 12:34:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:32.240 12:34:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:32.240 12:34:23 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:32.240 12:34:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:32.240 12:34:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.240 12:34:23 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:32.240 12:34:23 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:32.240 12:34:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.240 12:34:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.240 12:34:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.240 ************************************ 00:04:32.240 START TEST env 00:04:32.240 ************************************ 00:04:32.240 12:34:23 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:32.240 * Looking 
for test storage... 00:04:32.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:32.240 12:34:24 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:32.240 12:34:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.240 12:34:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.240 12:34:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.240 ************************************ 00:04:32.240 START TEST env_memory 00:04:32.240 ************************************ 00:04:32.240 12:34:24 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:32.240 00:04:32.240 00:04:32.240 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.240 http://cunit.sourceforge.net/ 00:04:32.240 00:04:32.240 00:04:32.240 Suite: memory 00:04:32.240 Test: alloc and free memory map ...[2024-07-15 12:34:24.131696] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:32.240 passed 00:04:32.240 Test: mem map translation ...[2024-07-15 12:34:24.160875] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:32.240 [2024-07-15 12:34:24.160895] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:32.240 [2024-07-15 12:34:24.160950] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:32.240 [2024-07-15 12:34:24.160959] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:32.498 passed 00:04:32.498 Test: mem map registration ...[2024-07-15 12:34:24.220752] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:32.498 [2024-07-15 12:34:24.220775] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:32.498 passed 00:04:32.498 Test: mem map adjacent registrations ...passed 00:04:32.498 00:04:32.498 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.498 suites 1 1 n/a 0 0 00:04:32.498 tests 4 4 4 0 0 00:04:32.498 asserts 152 152 152 0 n/a 00:04:32.498 00:04:32.498 Elapsed time = 0.203 seconds 00:04:32.498 00:04:32.498 real 0m0.217s 00:04:32.498 user 0m0.207s 00:04:32.498 sys 0m0.009s 00:04:32.498 12:34:24 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.498 12:34:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:32.498 ************************************ 00:04:32.498 END TEST env_memory 00:04:32.498 ************************************ 00:04:32.498 12:34:24 env -- common/autotest_common.sh@1142 -- # return 0 00:04:32.498 12:34:24 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:32.498 12:34:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.498 12:34:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.498 12:34:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.498 ************************************ 00:04:32.498 START TEST env_vtophys 00:04:32.498 ************************************ 00:04:32.498 12:34:24 env.env_vtophys -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:32.498 EAL: lib.eal log level changed from notice to debug 00:04:32.498 EAL: Detected lcore 0 as core 0 on socket 0 00:04:32.498 EAL: Detected lcore 1 as core 1 on socket 0 00:04:32.498 EAL: Detected lcore 2 as core 2 on socket 0 00:04:32.498 EAL: Detected lcore 3 as core 3 on socket 0 00:04:32.498 EAL: Detected lcore 4 as core 4 on socket 0 00:04:32.498 EAL: Detected lcore 5 as core 5 on socket 0 00:04:32.498 EAL: Detected lcore 6 as core 6 on socket 0 00:04:32.498 EAL: Detected lcore 7 as core 8 on socket 0 00:04:32.498 EAL: Detected lcore 8 as core 9 on socket 0 00:04:32.498 EAL: Detected lcore 9 as core 10 on socket 0 00:04:32.498 EAL: Detected lcore 10 as core 11 on socket 0 00:04:32.498 EAL: Detected lcore 11 as core 12 on socket 0 00:04:32.498 EAL: Detected lcore 12 as core 13 on socket 0 00:04:32.498 EAL: Detected lcore 13 as core 14 on socket 0 00:04:32.498 EAL: Detected lcore 14 as core 16 on socket 0 00:04:32.498 EAL: Detected lcore 15 as core 17 on socket 0 00:04:32.498 EAL: Detected lcore 16 as core 18 on socket 0 00:04:32.498 EAL: Detected lcore 17 as core 19 on socket 0 00:04:32.498 EAL: Detected lcore 18 as core 20 on socket 0 00:04:32.498 EAL: Detected lcore 19 as core 21 on socket 0 00:04:32.499 EAL: Detected lcore 20 as core 22 on socket 0 00:04:32.499 EAL: Detected lcore 21 as core 24 on socket 0 00:04:32.499 EAL: Detected lcore 22 as core 25 on socket 0 00:04:32.499 EAL: Detected lcore 23 as core 26 on socket 0 00:04:32.499 EAL: Detected lcore 24 as core 27 on socket 0 00:04:32.499 EAL: Detected lcore 25 as core 28 on socket 0 00:04:32.499 EAL: Detected lcore 26 as core 29 on socket 0 00:04:32.499 EAL: Detected lcore 27 as core 30 on socket 0 00:04:32.499 EAL: Detected lcore 28 as core 0 on socket 1 00:04:32.499 EAL: Detected lcore 29 as core 1 on socket 1 00:04:32.499 EAL: Detected lcore 30 as core 2 on socket 1 00:04:32.499 EAL: Detected lcore 31 as core 3 on 
socket 1 00:04:32.499 EAL: Detected lcore 32 as core 4 on socket 1 00:04:32.499 EAL: Detected lcore 33 as core 5 on socket 1 00:04:32.499 EAL: Detected lcore 34 as core 6 on socket 1 00:04:32.499 EAL: Detected lcore 35 as core 8 on socket 1 00:04:32.499 EAL: Detected lcore 36 as core 9 on socket 1 00:04:32.499 EAL: Detected lcore 37 as core 10 on socket 1 00:04:32.499 EAL: Detected lcore 38 as core 11 on socket 1 00:04:32.499 EAL: Detected lcore 39 as core 12 on socket 1 00:04:32.499 EAL: Detected lcore 40 as core 13 on socket 1 00:04:32.499 EAL: Detected lcore 41 as core 14 on socket 1 00:04:32.499 EAL: Detected lcore 42 as core 16 on socket 1 00:04:32.499 EAL: Detected lcore 43 as core 17 on socket 1 00:04:32.499 EAL: Detected lcore 44 as core 18 on socket 1 00:04:32.499 EAL: Detected lcore 45 as core 19 on socket 1 00:04:32.499 EAL: Detected lcore 46 as core 20 on socket 1 00:04:32.499 EAL: Detected lcore 47 as core 21 on socket 1 00:04:32.499 EAL: Detected lcore 48 as core 22 on socket 1 00:04:32.499 EAL: Detected lcore 49 as core 24 on socket 1 00:04:32.499 EAL: Detected lcore 50 as core 25 on socket 1 00:04:32.499 EAL: Detected lcore 51 as core 26 on socket 1 00:04:32.499 EAL: Detected lcore 52 as core 27 on socket 1 00:04:32.499 EAL: Detected lcore 53 as core 28 on socket 1 00:04:32.499 EAL: Detected lcore 54 as core 29 on socket 1 00:04:32.499 EAL: Detected lcore 55 as core 30 on socket 1 00:04:32.499 EAL: Detected lcore 56 as core 0 on socket 0 00:04:32.499 EAL: Detected lcore 57 as core 1 on socket 0 00:04:32.499 EAL: Detected lcore 58 as core 2 on socket 0 00:04:32.499 EAL: Detected lcore 59 as core 3 on socket 0 00:04:32.499 EAL: Detected lcore 60 as core 4 on socket 0 00:04:32.499 EAL: Detected lcore 61 as core 5 on socket 0 00:04:32.499 EAL: Detected lcore 62 as core 6 on socket 0 00:04:32.499 EAL: Detected lcore 63 as core 8 on socket 0 00:04:32.499 EAL: Detected lcore 64 as core 9 on socket 0 00:04:32.499 EAL: Detected lcore 65 as core 10 on socket 
0 00:04:32.499 EAL: Detected lcore 66 as core 11 on socket 0 00:04:32.499 EAL: Detected lcore 67 as core 12 on socket 0 00:04:32.499 EAL: Detected lcore 68 as core 13 on socket 0 00:04:32.499 EAL: Detected lcore 69 as core 14 on socket 0 00:04:32.499 EAL: Detected lcore 70 as core 16 on socket 0 00:04:32.499 EAL: Detected lcore 71 as core 17 on socket 0 00:04:32.499 EAL: Detected lcore 72 as core 18 on socket 0 00:04:32.499 EAL: Detected lcore 73 as core 19 on socket 0 00:04:32.499 EAL: Detected lcore 74 as core 20 on socket 0 00:04:32.499 EAL: Detected lcore 75 as core 21 on socket 0 00:04:32.499 EAL: Detected lcore 76 as core 22 on socket 0 00:04:32.499 EAL: Detected lcore 77 as core 24 on socket 0 00:04:32.499 EAL: Detected lcore 78 as core 25 on socket 0 00:04:32.499 EAL: Detected lcore 79 as core 26 on socket 0 00:04:32.499 EAL: Detected lcore 80 as core 27 on socket 0 00:04:32.499 EAL: Detected lcore 81 as core 28 on socket 0 00:04:32.499 EAL: Detected lcore 82 as core 29 on socket 0 00:04:32.499 EAL: Detected lcore 83 as core 30 on socket 0 00:04:32.499 EAL: Detected lcore 84 as core 0 on socket 1 00:04:32.499 EAL: Detected lcore 85 as core 1 on socket 1 00:04:32.499 EAL: Detected lcore 86 as core 2 on socket 1 00:04:32.499 EAL: Detected lcore 87 as core 3 on socket 1 00:04:32.499 EAL: Detected lcore 88 as core 4 on socket 1 00:04:32.499 EAL: Detected lcore 89 as core 5 on socket 1 00:04:32.499 EAL: Detected lcore 90 as core 6 on socket 1 00:04:32.499 EAL: Detected lcore 91 as core 8 on socket 1 00:04:32.499 EAL: Detected lcore 92 as core 9 on socket 1 00:04:32.499 EAL: Detected lcore 93 as core 10 on socket 1 00:04:32.499 EAL: Detected lcore 94 as core 11 on socket 1 00:04:32.499 EAL: Detected lcore 95 as core 12 on socket 1 00:04:32.499 EAL: Detected lcore 96 as core 13 on socket 1 00:04:32.499 EAL: Detected lcore 97 as core 14 on socket 1 00:04:32.499 EAL: Detected lcore 98 as core 16 on socket 1 00:04:32.499 EAL: Detected lcore 99 as core 17 on socket 1 
00:04:32.499 EAL: Detected lcore 100 as core 18 on socket 1 00:04:32.499 EAL: Detected lcore 101 as core 19 on socket 1 00:04:32.499 EAL: Detected lcore 102 as core 20 on socket 1 00:04:32.499 EAL: Detected lcore 103 as core 21 on socket 1 00:04:32.499 EAL: Detected lcore 104 as core 22 on socket 1 00:04:32.499 EAL: Detected lcore 105 as core 24 on socket 1 00:04:32.499 EAL: Detected lcore 106 as core 25 on socket 1 00:04:32.499 EAL: Detected lcore 107 as core 26 on socket 1 00:04:32.499 EAL: Detected lcore 108 as core 27 on socket 1 00:04:32.499 EAL: Detected lcore 109 as core 28 on socket 1 00:04:32.499 EAL: Detected lcore 110 as core 29 on socket 1 00:04:32.499 EAL: Detected lcore 111 as core 30 on socket 1 00:04:32.499 EAL: Maximum logical cores by configuration: 128 00:04:32.499 EAL: Detected CPU lcores: 112 00:04:32.499 EAL: Detected NUMA nodes: 2 00:04:32.499 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:32.499 EAL: Detected shared linkage of DPDK 00:04:32.499 EAL: No shared files mode enabled, IPC will be disabled 00:04:32.499 EAL: Bus pci wants IOVA as 'DC' 00:04:32.499 EAL: Buses did not request a specific IOVA mode. 00:04:32.499 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:32.499 EAL: Selected IOVA mode 'VA' 00:04:32.499 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.499 EAL: Probing VFIO support... 00:04:32.499 EAL: IOMMU type 1 (Type 1) is supported 00:04:32.499 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:32.499 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:32.499 EAL: VFIO support initialized 00:04:32.499 EAL: Ask a virtual area of 0x2e000 bytes 00:04:32.499 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:32.499 EAL: Setting up physically contiguous memory... 
00:04:32.499 EAL: Setting maximum number of open files to 524288 00:04:32.499 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:32.499 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:32.499 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:32.499 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.499 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:32.499 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.499 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.499 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:32.499 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:32.499 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.499 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:32.499 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.499 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.499 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:32.499 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:32.499 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.499 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:32.499 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.499 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.499 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:32.499 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:32.499 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.499 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:32.499 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.499 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.499 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:32.499 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:32.499 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:04:32.499 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.499 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:32.499 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.499 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.499 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:32.499 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:32.499 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.499 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:32.499 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.499 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.499 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:32.499 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:32.499 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.499 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:32.499 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.499 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.499 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:32.499 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:32.499 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.499 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:32.499 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.499 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.499 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:32.499 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:32.499 EAL: Hugepages will be freed exactly as allocated. 
00:04:32.499 EAL: No shared files mode enabled, IPC is disabled 00:04:32.499 EAL: No shared files mode enabled, IPC is disabled 00:04:32.499 EAL: TSC frequency is ~2200000 KHz 00:04:32.499 EAL: Main lcore 0 is ready (tid=7f6add8b2a00;cpuset=[0]) 00:04:32.499 EAL: Trying to obtain current memory policy. 00:04:32.499 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.499 EAL: Restoring previous memory policy: 0 00:04:32.499 EAL: request: mp_malloc_sync 00:04:32.499 EAL: No shared files mode enabled, IPC is disabled 00:04:32.499 EAL: Heap on socket 0 was expanded by 2MB 00:04:32.499 EAL: No shared files mode enabled, IPC is disabled 00:04:32.758 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:32.758 EAL: Mem event callback 'spdk:(nil)' registered 00:04:32.758 00:04:32.758 00:04:32.758 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.758 http://cunit.sourceforge.net/ 00:04:32.758 00:04:32.758 00:04:32.758 Suite: components_suite 00:04:32.758 Test: vtophys_malloc_test ...passed 00:04:32.758 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:32.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.758 EAL: Restoring previous memory policy: 4 00:04:32.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.758 EAL: request: mp_malloc_sync 00:04:32.758 EAL: No shared files mode enabled, IPC is disabled 00:04:32.758 EAL: Heap on socket 0 was expanded by 4MB 00:04:32.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.758 EAL: request: mp_malloc_sync 00:04:32.758 EAL: No shared files mode enabled, IPC is disabled 00:04:32.758 EAL: Heap on socket 0 was shrunk by 4MB 00:04:32.758 EAL: Trying to obtain current memory policy. 
00:04:32.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.758 EAL: Restoring previous memory policy: 4 00:04:32.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.758 EAL: request: mp_malloc_sync 00:04:32.758 EAL: No shared files mode enabled, IPC is disabled 00:04:32.758 EAL: Heap on socket 0 was expanded by 6MB 00:04:32.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.758 EAL: request: mp_malloc_sync 00:04:32.758 EAL: No shared files mode enabled, IPC is disabled 00:04:32.758 EAL: Heap on socket 0 was shrunk by 6MB 00:04:32.758 EAL: Trying to obtain current memory policy. 00:04:32.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.758 EAL: Restoring previous memory policy: 4 00:04:32.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.758 EAL: request: mp_malloc_sync 00:04:32.758 EAL: No shared files mode enabled, IPC is disabled 00:04:32.758 EAL: Heap on socket 0 was expanded by 10MB 00:04:32.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.758 EAL: request: mp_malloc_sync 00:04:32.758 EAL: No shared files mode enabled, IPC is disabled 00:04:32.758 EAL: Heap on socket 0 was shrunk by 10MB 00:04:32.758 EAL: Trying to obtain current memory policy. 00:04:32.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.758 EAL: Restoring previous memory policy: 4 00:04:32.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.758 EAL: request: mp_malloc_sync 00:04:32.758 EAL: No shared files mode enabled, IPC is disabled 00:04:32.758 EAL: Heap on socket 0 was expanded by 18MB 00:04:32.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.758 EAL: request: mp_malloc_sync 00:04:32.758 EAL: No shared files mode enabled, IPC is disabled 00:04:32.758 EAL: Heap on socket 0 was shrunk by 18MB 00:04:32.758 EAL: Trying to obtain current memory policy. 
00:04:32.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.758 EAL: Restoring previous memory policy: 4 00:04:32.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.758 EAL: request: mp_malloc_sync 00:04:32.758 EAL: No shared files mode enabled, IPC is disabled 00:04:32.758 EAL: Heap on socket 0 was expanded by 34MB 00:04:32.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.758 EAL: request: mp_malloc_sync 00:04:32.758 EAL: No shared files mode enabled, IPC is disabled 00:04:32.758 EAL: Heap on socket 0 was shrunk by 34MB 00:04:32.758 EAL: Trying to obtain current memory policy. 00:04:32.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.758 EAL: Restoring previous memory policy: 4 00:04:32.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.758 EAL: request: mp_malloc_sync 00:04:32.758 EAL: No shared files mode enabled, IPC is disabled 00:04:32.758 EAL: Heap on socket 0 was expanded by 66MB 00:04:32.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.758 EAL: request: mp_malloc_sync 00:04:32.758 EAL: No shared files mode enabled, IPC is disabled 00:04:32.758 EAL: Heap on socket 0 was shrunk by 66MB 00:04:32.758 EAL: Trying to obtain current memory policy. 00:04:32.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.758 EAL: Restoring previous memory policy: 4 00:04:32.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.758 EAL: request: mp_malloc_sync 00:04:32.758 EAL: No shared files mode enabled, IPC is disabled 00:04:32.758 EAL: Heap on socket 0 was expanded by 130MB 00:04:32.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.758 EAL: request: mp_malloc_sync 00:04:32.758 EAL: No shared files mode enabled, IPC is disabled 00:04:32.758 EAL: Heap on socket 0 was shrunk by 130MB 00:04:32.758 EAL: Trying to obtain current memory policy. 
00:04:32.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.758 EAL: Restoring previous memory policy: 4 00:04:32.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.758 EAL: request: mp_malloc_sync 00:04:32.758 EAL: No shared files mode enabled, IPC is disabled 00:04:32.758 EAL: Heap on socket 0 was expanded by 258MB 00:04:32.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.016 EAL: request: mp_malloc_sync 00:04:33.016 EAL: No shared files mode enabled, IPC is disabled 00:04:33.016 EAL: Heap on socket 0 was shrunk by 258MB 00:04:33.016 EAL: Trying to obtain current memory policy. 00:04:33.016 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.016 EAL: Restoring previous memory policy: 4 00:04:33.016 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.016 EAL: request: mp_malloc_sync 00:04:33.016 EAL: No shared files mode enabled, IPC is disabled 00:04:33.016 EAL: Heap on socket 0 was expanded by 514MB 00:04:33.016 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.275 EAL: request: mp_malloc_sync 00:04:33.275 EAL: No shared files mode enabled, IPC is disabled 00:04:33.275 EAL: Heap on socket 0 was shrunk by 514MB 00:04:33.275 EAL: Trying to obtain current memory policy. 
00:04:33.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.275 EAL: Restoring previous memory policy: 4 00:04:33.275 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.275 EAL: request: mp_malloc_sync 00:04:33.275 EAL: No shared files mode enabled, IPC is disabled 00:04:33.275 EAL: Heap on socket 0 was expanded by 1026MB 00:04:33.533 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.791 EAL: request: mp_malloc_sync 00:04:33.791 EAL: No shared files mode enabled, IPC is disabled 00:04:33.791 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:33.791 passed 00:04:33.791 00:04:33.791 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.791 suites 1 1 n/a 0 0 00:04:33.791 tests 2 2 2 0 0 00:04:33.791 asserts 497 497 497 0 n/a 00:04:33.791 00:04:33.791 Elapsed time = 1.022 seconds 00:04:33.791 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.791 EAL: request: mp_malloc_sync 00:04:33.791 EAL: No shared files mode enabled, IPC is disabled 00:04:33.791 EAL: Heap on socket 0 was shrunk by 2MB 00:04:33.791 EAL: No shared files mode enabled, IPC is disabled 00:04:33.791 EAL: No shared files mode enabled, IPC is disabled 00:04:33.791 EAL: No shared files mode enabled, IPC is disabled 00:04:33.791 00:04:33.791 real 0m1.149s 00:04:33.791 user 0m0.675s 00:04:33.791 sys 0m0.445s 00:04:33.791 12:34:25 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.791 12:34:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:33.791 ************************************ 00:04:33.791 END TEST env_vtophys 00:04:33.791 ************************************ 00:04:33.791 12:34:25 env -- common/autotest_common.sh@1142 -- # return 0 00:04:33.791 12:34:25 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:33.791 12:34:25 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.791 12:34:25 env -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:04:33.791 12:34:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.791 ************************************ 00:04:33.791 START TEST env_pci 00:04:33.791 ************************************ 00:04:33.791 12:34:25 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:33.791 00:04:33.791 00:04:33.791 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.791 http://cunit.sourceforge.net/ 00:04:33.791 00:04:33.791 00:04:33.791 Suite: pci 00:04:33.791 Test: pci_hook ...[2024-07-15 12:34:25.601896] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3728999 has claimed it 00:04:33.791 EAL: Cannot find device (10000:00:01.0) 00:04:33.791 EAL: Failed to attach device on primary process 00:04:33.791 passed 00:04:33.791 00:04:33.791 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.791 suites 1 1 n/a 0 0 00:04:33.791 tests 1 1 1 0 0 00:04:33.791 asserts 25 25 25 0 n/a 00:04:33.791 00:04:33.791 Elapsed time = 0.028 seconds 00:04:33.791 00:04:33.791 real 0m0.047s 00:04:33.791 user 0m0.015s 00:04:33.791 sys 0m0.032s 00:04:33.791 12:34:25 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.791 12:34:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:33.791 ************************************ 00:04:33.791 END TEST env_pci 00:04:33.791 ************************************ 00:04:33.791 12:34:25 env -- common/autotest_common.sh@1142 -- # return 0 00:04:33.791 12:34:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:33.791 12:34:25 env -- env/env.sh@15 -- # uname 00:04:33.791 12:34:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:33.791 12:34:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:33.791 12:34:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:33.791 12:34:25 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:33.791 12:34:25 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.791 12:34:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.791 ************************************ 00:04:33.791 START TEST env_dpdk_post_init 00:04:33.791 ************************************ 00:04:33.791 12:34:25 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:34.050 EAL: Detected CPU lcores: 112 00:04:34.050 EAL: Detected NUMA nodes: 2 00:04:34.050 EAL: Detected shared linkage of DPDK 00:04:34.050 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:34.050 EAL: Selected IOVA mode 'VA' 00:04:34.050 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.050 EAL: VFIO support initialized 00:04:34.050 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:34.050 EAL: Using IOMMU type 1 (Type 1) 00:04:34.050 EAL: Ignore mapping IO port bar(1) 00:04:34.050 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:34.050 EAL: Ignore mapping IO port bar(1) 00:04:34.050 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:34.050 EAL: Ignore mapping IO port bar(1) 00:04:34.050 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:34.050 EAL: Ignore mapping IO port bar(1) 00:04:34.050 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:34.050 EAL: Ignore mapping IO port bar(1) 00:04:34.050 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:34.050 EAL: Ignore mapping IO port bar(1) 00:04:34.050 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 
0000:00:04.5 (socket 0) 00:04:34.050 EAL: Ignore mapping IO port bar(1) 00:04:34.050 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:34.050 EAL: Ignore mapping IO port bar(1) 00:04:34.050 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:34.050 EAL: Ignore mapping IO port bar(1) 00:04:34.050 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:34.051 EAL: Ignore mapping IO port bar(1) 00:04:34.051 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:34.051 EAL: Ignore mapping IO port bar(1) 00:04:34.051 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:34.310 EAL: Ignore mapping IO port bar(1) 00:04:34.310 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:34.310 EAL: Ignore mapping IO port bar(1) 00:04:34.310 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:34.310 EAL: Ignore mapping IO port bar(1) 00:04:34.310 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:34.310 EAL: Ignore mapping IO port bar(1) 00:04:34.310 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:34.310 EAL: Ignore mapping IO port bar(1) 00:04:34.310 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:34.877 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:86:00.0 (socket 1) 00:04:38.161 EAL: Releasing PCI mapped resource for 0000:86:00.0 00:04:38.161 EAL: Calling pci_unmap_resource for 0000:86:00.0 at 0x202001040000 00:04:38.420 Starting DPDK initialization... 00:04:38.420 Starting SPDK post initialization... 00:04:38.420 SPDK NVMe probe 00:04:38.420 Attaching to 0000:86:00.0 00:04:38.420 Attached to 0000:86:00.0 00:04:38.420 Cleaning up... 
00:04:38.420 00:04:38.420 real 0m4.439s 00:04:38.420 user 0m3.351s 00:04:38.420 sys 0m0.147s 00:04:38.420 12:34:30 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.420 12:34:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.420 ************************************ 00:04:38.420 END TEST env_dpdk_post_init 00:04:38.420 ************************************ 00:04:38.420 12:34:30 env -- common/autotest_common.sh@1142 -- # return 0 00:04:38.420 12:34:30 env -- env/env.sh@26 -- # uname 00:04:38.420 12:34:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:38.420 12:34:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.420 12:34:30 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.420 12:34:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.420 12:34:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.420 ************************************ 00:04:38.420 START TEST env_mem_callbacks 00:04:38.421 ************************************ 00:04:38.421 12:34:30 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.421 EAL: Detected CPU lcores: 112 00:04:38.421 EAL: Detected NUMA nodes: 2 00:04:38.421 EAL: Detected shared linkage of DPDK 00:04:38.421 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:38.421 EAL: Selected IOVA mode 'VA' 00:04:38.421 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.421 EAL: VFIO support initialized 00:04:38.421 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:38.421 00:04:38.421 00:04:38.421 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.421 http://cunit.sourceforge.net/ 00:04:38.421 00:04:38.421 00:04:38.421 Suite: memory 00:04:38.421 Test: test ... 
00:04:38.421 register 0x200000200000 2097152 00:04:38.421 malloc 3145728 00:04:38.421 register 0x200000400000 4194304 00:04:38.421 buf 0x200000500000 len 3145728 PASSED 00:04:38.421 malloc 64 00:04:38.421 buf 0x2000004fff40 len 64 PASSED 00:04:38.421 malloc 4194304 00:04:38.421 register 0x200000800000 6291456 00:04:38.421 buf 0x200000a00000 len 4194304 PASSED 00:04:38.421 free 0x200000500000 3145728 00:04:38.421 free 0x2000004fff40 64 00:04:38.421 unregister 0x200000400000 4194304 PASSED 00:04:38.421 free 0x200000a00000 4194304 00:04:38.421 unregister 0x200000800000 6291456 PASSED 00:04:38.421 malloc 8388608 00:04:38.421 register 0x200000400000 10485760 00:04:38.421 buf 0x200000600000 len 8388608 PASSED 00:04:38.421 free 0x200000600000 8388608 00:04:38.421 unregister 0x200000400000 10485760 PASSED 00:04:38.421 passed 00:04:38.421 00:04:38.421 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.421 suites 1 1 n/a 0 0 00:04:38.421 tests 1 1 1 0 0 00:04:38.421 asserts 15 15 15 0 n/a 00:04:38.421 00:04:38.421 Elapsed time = 0.008 seconds 00:04:38.421 00:04:38.421 real 0m0.060s 00:04:38.421 user 0m0.019s 00:04:38.421 sys 0m0.041s 00:04:38.421 12:34:30 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.421 12:34:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:38.421 ************************************ 00:04:38.421 END TEST env_mem_callbacks 00:04:38.421 ************************************ 00:04:38.421 12:34:30 env -- common/autotest_common.sh@1142 -- # return 0 00:04:38.421 00:04:38.421 real 0m6.352s 00:04:38.421 user 0m4.455s 00:04:38.421 sys 0m0.960s 00:04:38.421 12:34:30 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.421 12:34:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.421 ************************************ 00:04:38.421 END TEST env 00:04:38.421 ************************************ 00:04:38.421 12:34:30 -- common/autotest_common.sh@1142 -- # return 0 
00:04:38.421 12:34:30 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:38.421 12:34:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.421 12:34:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.421 12:34:30 -- common/autotest_common.sh@10 -- # set +x 00:04:38.679 ************************************ 00:04:38.679 START TEST rpc 00:04:38.679 ************************************ 00:04:38.679 12:34:30 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:38.679 * Looking for test storage... 00:04:38.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:38.679 12:34:30 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3729932 00:04:38.679 12:34:30 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.679 12:34:30 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:38.679 12:34:30 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3729932 00:04:38.679 12:34:30 rpc -- common/autotest_common.sh@829 -- # '[' -z 3729932 ']' 00:04:38.679 12:34:30 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.679 12:34:30 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.679 12:34:30 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.679 12:34:30 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.679 12:34:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.679 [2024-07-15 12:34:30.532920] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:04:38.679 [2024-07-15 12:34:30.532981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3729932 ] 00:04:38.679 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.679 [2024-07-15 12:34:30.613197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.937 [2024-07-15 12:34:30.704610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:38.937 [2024-07-15 12:34:30.704653] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3729932' to capture a snapshot of events at runtime. 00:04:38.937 [2024-07-15 12:34:30.704664] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:38.937 [2024-07-15 12:34:30.704672] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:38.937 [2024-07-15 12:34:30.704679] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3729932 for offline analysis/debug. 
00:04:38.937 [2024-07-15 12:34:30.704704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.869 12:34:31 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.869 12:34:31 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:39.869 12:34:31 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:39.869 12:34:31 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:39.869 12:34:31 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:39.869 12:34:31 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:39.869 12:34:31 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.869 12:34:31 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.869 12:34:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.869 ************************************ 00:04:39.869 START TEST rpc_integrity 00:04:39.869 ************************************ 00:04:39.869 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:39.869 12:34:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.869 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.869 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.869 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.869 12:34:31 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:39.869 12:34:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:39.870 12:34:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.870 12:34:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.870 12:34:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:39.870 12:34:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.870 12:34:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:39.870 { 00:04:39.870 "name": "Malloc0", 00:04:39.870 "aliases": [ 00:04:39.870 "c71f11ce-ab99-44e2-aa4c-72d15285370b" 00:04:39.870 ], 00:04:39.870 "product_name": "Malloc disk", 00:04:39.870 "block_size": 512, 00:04:39.870 "num_blocks": 16384, 00:04:39.870 "uuid": "c71f11ce-ab99-44e2-aa4c-72d15285370b", 00:04:39.870 "assigned_rate_limits": { 00:04:39.870 "rw_ios_per_sec": 0, 00:04:39.870 "rw_mbytes_per_sec": 0, 00:04:39.870 "r_mbytes_per_sec": 0, 00:04:39.870 "w_mbytes_per_sec": 0 00:04:39.870 }, 00:04:39.870 "claimed": false, 00:04:39.870 "zoned": false, 00:04:39.870 "supported_io_types": { 00:04:39.870 "read": true, 00:04:39.870 "write": true, 00:04:39.870 "unmap": true, 00:04:39.870 "flush": true, 00:04:39.870 "reset": true, 00:04:39.870 "nvme_admin": false, 00:04:39.870 "nvme_io": false, 00:04:39.870 "nvme_io_md": false, 00:04:39.870 "write_zeroes": true, 00:04:39.870 "zcopy": true, 00:04:39.870 "get_zone_info": false, 00:04:39.870 
"zone_management": false, 00:04:39.870 "zone_append": false, 00:04:39.870 "compare": false, 00:04:39.870 "compare_and_write": false, 00:04:39.870 "abort": true, 00:04:39.870 "seek_hole": false, 00:04:39.870 "seek_data": false, 00:04:39.870 "copy": true, 00:04:39.870 "nvme_iov_md": false 00:04:39.870 }, 00:04:39.870 "memory_domains": [ 00:04:39.870 { 00:04:39.870 "dma_device_id": "system", 00:04:39.870 "dma_device_type": 1 00:04:39.870 }, 00:04:39.870 { 00:04:39.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.870 "dma_device_type": 2 00:04:39.870 } 00:04:39.870 ], 00:04:39.870 "driver_specific": {} 00:04:39.870 } 00:04:39.870 ]' 00:04:39.870 12:34:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:39.870 12:34:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:39.870 12:34:31 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.870 [2024-07-15 12:34:31.626570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:39.870 [2024-07-15 12:34:31.626605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:39.870 [2024-07-15 12:34:31.626621] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdb4c80 00:04:39.870 [2024-07-15 12:34:31.626630] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:39.870 [2024-07-15 12:34:31.628130] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:39.870 [2024-07-15 12:34:31.628156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:39.870 Passthru0 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.870 12:34:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.870 12:34:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:39.870 { 00:04:39.870 "name": "Malloc0", 00:04:39.870 "aliases": [ 00:04:39.870 "c71f11ce-ab99-44e2-aa4c-72d15285370b" 00:04:39.870 ], 00:04:39.870 "product_name": "Malloc disk", 00:04:39.870 "block_size": 512, 00:04:39.870 "num_blocks": 16384, 00:04:39.870 "uuid": "c71f11ce-ab99-44e2-aa4c-72d15285370b", 00:04:39.870 "assigned_rate_limits": { 00:04:39.870 "rw_ios_per_sec": 0, 00:04:39.870 "rw_mbytes_per_sec": 0, 00:04:39.870 "r_mbytes_per_sec": 0, 00:04:39.870 "w_mbytes_per_sec": 0 00:04:39.870 }, 00:04:39.870 "claimed": true, 00:04:39.870 "claim_type": "exclusive_write", 00:04:39.870 "zoned": false, 00:04:39.870 "supported_io_types": { 00:04:39.870 "read": true, 00:04:39.870 "write": true, 00:04:39.870 "unmap": true, 00:04:39.870 "flush": true, 00:04:39.870 "reset": true, 00:04:39.870 "nvme_admin": false, 00:04:39.870 "nvme_io": false, 00:04:39.870 "nvme_io_md": false, 00:04:39.870 "write_zeroes": true, 00:04:39.870 "zcopy": true, 00:04:39.870 "get_zone_info": false, 00:04:39.870 "zone_management": false, 00:04:39.870 "zone_append": false, 00:04:39.870 "compare": false, 00:04:39.870 "compare_and_write": false, 00:04:39.870 "abort": true, 00:04:39.870 "seek_hole": false, 00:04:39.870 "seek_data": false, 00:04:39.870 "copy": true, 00:04:39.870 "nvme_iov_md": false 00:04:39.870 }, 00:04:39.870 "memory_domains": [ 00:04:39.870 { 00:04:39.870 "dma_device_id": "system", 00:04:39.870 "dma_device_type": 1 00:04:39.870 }, 00:04:39.870 { 00:04:39.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.870 "dma_device_type": 2 00:04:39.870 } 00:04:39.870 ], 00:04:39.870 "driver_specific": {} 00:04:39.870 }, 00:04:39.870 { 
00:04:39.870 "name": "Passthru0", 00:04:39.870 "aliases": [ 00:04:39.870 "689f9f3b-f993-5874-a8c1-deb45c653bfc" 00:04:39.870 ], 00:04:39.870 "product_name": "passthru", 00:04:39.870 "block_size": 512, 00:04:39.870 "num_blocks": 16384, 00:04:39.870 "uuid": "689f9f3b-f993-5874-a8c1-deb45c653bfc", 00:04:39.870 "assigned_rate_limits": { 00:04:39.870 "rw_ios_per_sec": 0, 00:04:39.870 "rw_mbytes_per_sec": 0, 00:04:39.870 "r_mbytes_per_sec": 0, 00:04:39.870 "w_mbytes_per_sec": 0 00:04:39.870 }, 00:04:39.870 "claimed": false, 00:04:39.870 "zoned": false, 00:04:39.870 "supported_io_types": { 00:04:39.870 "read": true, 00:04:39.870 "write": true, 00:04:39.870 "unmap": true, 00:04:39.870 "flush": true, 00:04:39.870 "reset": true, 00:04:39.870 "nvme_admin": false, 00:04:39.870 "nvme_io": false, 00:04:39.870 "nvme_io_md": false, 00:04:39.870 "write_zeroes": true, 00:04:39.870 "zcopy": true, 00:04:39.870 "get_zone_info": false, 00:04:39.870 "zone_management": false, 00:04:39.870 "zone_append": false, 00:04:39.870 "compare": false, 00:04:39.870 "compare_and_write": false, 00:04:39.870 "abort": true, 00:04:39.870 "seek_hole": false, 00:04:39.870 "seek_data": false, 00:04:39.870 "copy": true, 00:04:39.870 "nvme_iov_md": false 00:04:39.870 }, 00:04:39.870 "memory_domains": [ 00:04:39.870 { 00:04:39.870 "dma_device_id": "system", 00:04:39.870 "dma_device_type": 1 00:04:39.870 }, 00:04:39.870 { 00:04:39.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.870 "dma_device_type": 2 00:04:39.870 } 00:04:39.870 ], 00:04:39.870 "driver_specific": { 00:04:39.870 "passthru": { 00:04:39.870 "name": "Passthru0", 00:04:39.870 "base_bdev_name": "Malloc0" 00:04:39.870 } 00:04:39.870 } 00:04:39.870 } 00:04:39.870 ]' 00:04:39.870 12:34:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:39.870 12:34:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:39.870 12:34:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:39.870 12:34:31 
rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.870 12:34:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.870 12:34:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.870 12:34:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:39.870 12:34:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:39.870 12:34:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:39.870 00:04:39.870 real 0m0.290s 00:04:39.870 user 0m0.197s 00:04:39.870 sys 0m0.027s 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.870 12:34:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.870 ************************************ 00:04:39.870 END TEST rpc_integrity 00:04:39.870 ************************************ 00:04:40.129 12:34:31 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:40.129 12:34:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:40.129 12:34:31 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.129 12:34:31 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.129 12:34:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.129 
************************************ 00:04:40.129 START TEST rpc_plugins 00:04:40.129 ************************************ 00:04:40.129 12:34:31 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:40.129 12:34:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:40.129 12:34:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.129 12:34:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.129 12:34:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.129 12:34:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:40.129 12:34:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:40.129 12:34:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.129 12:34:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.129 12:34:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.129 12:34:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:40.129 { 00:04:40.129 "name": "Malloc1", 00:04:40.129 "aliases": [ 00:04:40.129 "26518c9b-dc07-4f38-b448-7755568521ec" 00:04:40.129 ], 00:04:40.129 "product_name": "Malloc disk", 00:04:40.129 "block_size": 4096, 00:04:40.129 "num_blocks": 256, 00:04:40.129 "uuid": "26518c9b-dc07-4f38-b448-7755568521ec", 00:04:40.129 "assigned_rate_limits": { 00:04:40.129 "rw_ios_per_sec": 0, 00:04:40.129 "rw_mbytes_per_sec": 0, 00:04:40.129 "r_mbytes_per_sec": 0, 00:04:40.129 "w_mbytes_per_sec": 0 00:04:40.129 }, 00:04:40.129 "claimed": false, 00:04:40.129 "zoned": false, 00:04:40.129 "supported_io_types": { 00:04:40.129 "read": true, 00:04:40.129 "write": true, 00:04:40.129 "unmap": true, 00:04:40.129 "flush": true, 00:04:40.129 "reset": true, 00:04:40.129 "nvme_admin": false, 00:04:40.129 "nvme_io": false, 00:04:40.129 "nvme_io_md": false, 00:04:40.129 "write_zeroes": true, 00:04:40.129 "zcopy": true, 00:04:40.129 
"get_zone_info": false, 00:04:40.129 "zone_management": false, 00:04:40.129 "zone_append": false, 00:04:40.129 "compare": false, 00:04:40.129 "compare_and_write": false, 00:04:40.129 "abort": true, 00:04:40.129 "seek_hole": false, 00:04:40.129 "seek_data": false, 00:04:40.129 "copy": true, 00:04:40.129 "nvme_iov_md": false 00:04:40.129 }, 00:04:40.129 "memory_domains": [ 00:04:40.129 { 00:04:40.129 "dma_device_id": "system", 00:04:40.129 "dma_device_type": 1 00:04:40.129 }, 00:04:40.129 { 00:04:40.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.129 "dma_device_type": 2 00:04:40.129 } 00:04:40.129 ], 00:04:40.129 "driver_specific": {} 00:04:40.129 } 00:04:40.129 ]' 00:04:40.129 12:34:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:40.129 12:34:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:40.129 12:34:31 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:40.129 12:34:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.129 12:34:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.129 12:34:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.129 12:34:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:40.129 12:34:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.129 12:34:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.129 12:34:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.129 12:34:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:40.129 12:34:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:40.129 12:34:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:40.129 00:04:40.129 real 0m0.147s 00:04:40.129 user 0m0.093s 00:04:40.129 sys 0m0.019s 00:04:40.129 12:34:31 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.129 12:34:31 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:04:40.129 ************************************ 00:04:40.129 END TEST rpc_plugins 00:04:40.129 ************************************ 00:04:40.129 12:34:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:40.129 12:34:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:40.129 12:34:32 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.129 12:34:32 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.129 12:34:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.129 ************************************ 00:04:40.129 START TEST rpc_trace_cmd_test 00:04:40.129 ************************************ 00:04:40.129 12:34:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:40.129 12:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:40.129 12:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:40.130 12:34:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.130 12:34:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:40.388 12:34:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.388 12:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:40.388 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3729932", 00:04:40.388 "tpoint_group_mask": "0x8", 00:04:40.388 "iscsi_conn": { 00:04:40.388 "mask": "0x2", 00:04:40.388 "tpoint_mask": "0x0" 00:04:40.388 }, 00:04:40.388 "scsi": { 00:04:40.388 "mask": "0x4", 00:04:40.388 "tpoint_mask": "0x0" 00:04:40.388 }, 00:04:40.388 "bdev": { 00:04:40.388 "mask": "0x8", 00:04:40.388 "tpoint_mask": "0xffffffffffffffff" 00:04:40.388 }, 00:04:40.388 "nvmf_rdma": { 00:04:40.388 "mask": "0x10", 00:04:40.388 "tpoint_mask": "0x0" 00:04:40.388 }, 00:04:40.388 "nvmf_tcp": { 00:04:40.388 "mask": "0x20", 00:04:40.388 "tpoint_mask": "0x0" 00:04:40.388 }, 
00:04:40.388 "ftl": { 00:04:40.388 "mask": "0x40", 00:04:40.388 "tpoint_mask": "0x0" 00:04:40.388 }, 00:04:40.388 "blobfs": { 00:04:40.388 "mask": "0x80", 00:04:40.388 "tpoint_mask": "0x0" 00:04:40.388 }, 00:04:40.388 "dsa": { 00:04:40.388 "mask": "0x200", 00:04:40.388 "tpoint_mask": "0x0" 00:04:40.388 }, 00:04:40.388 "thread": { 00:04:40.388 "mask": "0x400", 00:04:40.388 "tpoint_mask": "0x0" 00:04:40.388 }, 00:04:40.388 "nvme_pcie": { 00:04:40.388 "mask": "0x800", 00:04:40.388 "tpoint_mask": "0x0" 00:04:40.388 }, 00:04:40.388 "iaa": { 00:04:40.388 "mask": "0x1000", 00:04:40.388 "tpoint_mask": "0x0" 00:04:40.388 }, 00:04:40.388 "nvme_tcp": { 00:04:40.388 "mask": "0x2000", 00:04:40.388 "tpoint_mask": "0x0" 00:04:40.388 }, 00:04:40.388 "bdev_nvme": { 00:04:40.388 "mask": "0x4000", 00:04:40.388 "tpoint_mask": "0x0" 00:04:40.388 }, 00:04:40.388 "sock": { 00:04:40.388 "mask": "0x8000", 00:04:40.388 "tpoint_mask": "0x0" 00:04:40.388 } 00:04:40.388 }' 00:04:40.388 12:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:40.388 12:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:40.388 12:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:40.389 12:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:40.389 12:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:40.389 12:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:40.389 12:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:40.389 12:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:40.389 12:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:40.389 12:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:40.389 00:04:40.389 real 0m0.247s 00:04:40.389 user 0m0.212s 00:04:40.389 sys 0m0.026s 00:04:40.389 12:34:32 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.389 12:34:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:40.389 ************************************ 00:04:40.389 END TEST rpc_trace_cmd_test 00:04:40.389 ************************************ 00:04:40.648 12:34:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:40.648 12:34:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:40.648 12:34:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:40.648 12:34:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:40.648 12:34:32 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.648 12:34:32 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.648 12:34:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.648 ************************************ 00:04:40.648 START TEST rpc_daemon_integrity 00:04:40.648 ************************************ 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.648 12:34:32 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:40.648 { 00:04:40.648 "name": "Malloc2", 00:04:40.648 "aliases": [ 00:04:40.648 "04408261-1cb0-42eb-802d-fe6659d93632" 00:04:40.648 ], 00:04:40.648 "product_name": "Malloc disk", 00:04:40.648 "block_size": 512, 00:04:40.648 "num_blocks": 16384, 00:04:40.648 "uuid": "04408261-1cb0-42eb-802d-fe6659d93632", 00:04:40.648 "assigned_rate_limits": { 00:04:40.648 "rw_ios_per_sec": 0, 00:04:40.648 "rw_mbytes_per_sec": 0, 00:04:40.648 "r_mbytes_per_sec": 0, 00:04:40.648 "w_mbytes_per_sec": 0 00:04:40.648 }, 00:04:40.648 "claimed": false, 00:04:40.648 "zoned": false, 00:04:40.648 "supported_io_types": { 00:04:40.648 "read": true, 00:04:40.648 "write": true, 00:04:40.648 "unmap": true, 00:04:40.648 "flush": true, 00:04:40.648 "reset": true, 00:04:40.648 "nvme_admin": false, 00:04:40.648 "nvme_io": false, 00:04:40.648 "nvme_io_md": false, 00:04:40.648 "write_zeroes": true, 00:04:40.648 "zcopy": true, 00:04:40.648 "get_zone_info": false, 00:04:40.648 "zone_management": false, 00:04:40.648 "zone_append": false, 00:04:40.648 "compare": false, 00:04:40.648 "compare_and_write": false, 00:04:40.648 "abort": true, 00:04:40.648 "seek_hole": false, 00:04:40.648 "seek_data": false, 00:04:40.648 "copy": true, 00:04:40.648 "nvme_iov_md": false 00:04:40.648 }, 00:04:40.648 "memory_domains": [ 00:04:40.648 { 00:04:40.648 "dma_device_id": "system", 00:04:40.648 "dma_device_type": 
1 00:04:40.648 }, 00:04:40.648 { 00:04:40.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.648 "dma_device_type": 2 00:04:40.648 } 00:04:40.648 ], 00:04:40.648 "driver_specific": {} 00:04:40.648 } 00:04:40.648 ]' 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.648 [2024-07-15 12:34:32.517131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:40.648 [2024-07-15 12:34:32.517164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:40.648 [2024-07-15 12:34:32.517182] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdb61c0 00:04:40.648 [2024-07-15 12:34:32.517192] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:40.648 [2024-07-15 12:34:32.518570] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:40.648 [2024-07-15 12:34:32.518593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:40.648 Passthru0 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.648 12:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 
00:04:40.648 { 00:04:40.648 "name": "Malloc2", 00:04:40.648 "aliases": [ 00:04:40.648 "04408261-1cb0-42eb-802d-fe6659d93632" 00:04:40.648 ], 00:04:40.648 "product_name": "Malloc disk", 00:04:40.648 "block_size": 512, 00:04:40.648 "num_blocks": 16384, 00:04:40.648 "uuid": "04408261-1cb0-42eb-802d-fe6659d93632", 00:04:40.648 "assigned_rate_limits": { 00:04:40.648 "rw_ios_per_sec": 0, 00:04:40.648 "rw_mbytes_per_sec": 0, 00:04:40.648 "r_mbytes_per_sec": 0, 00:04:40.648 "w_mbytes_per_sec": 0 00:04:40.648 }, 00:04:40.648 "claimed": true, 00:04:40.648 "claim_type": "exclusive_write", 00:04:40.648 "zoned": false, 00:04:40.648 "supported_io_types": { 00:04:40.648 "read": true, 00:04:40.648 "write": true, 00:04:40.648 "unmap": true, 00:04:40.648 "flush": true, 00:04:40.648 "reset": true, 00:04:40.648 "nvme_admin": false, 00:04:40.648 "nvme_io": false, 00:04:40.648 "nvme_io_md": false, 00:04:40.648 "write_zeroes": true, 00:04:40.648 "zcopy": true, 00:04:40.648 "get_zone_info": false, 00:04:40.648 "zone_management": false, 00:04:40.648 "zone_append": false, 00:04:40.648 "compare": false, 00:04:40.648 "compare_and_write": false, 00:04:40.648 "abort": true, 00:04:40.648 "seek_hole": false, 00:04:40.648 "seek_data": false, 00:04:40.648 "copy": true, 00:04:40.648 "nvme_iov_md": false 00:04:40.648 }, 00:04:40.648 "memory_domains": [ 00:04:40.648 { 00:04:40.648 "dma_device_id": "system", 00:04:40.648 "dma_device_type": 1 00:04:40.648 }, 00:04:40.648 { 00:04:40.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.648 "dma_device_type": 2 00:04:40.648 } 00:04:40.648 ], 00:04:40.648 "driver_specific": {} 00:04:40.648 }, 00:04:40.648 { 00:04:40.648 "name": "Passthru0", 00:04:40.648 "aliases": [ 00:04:40.648 "322824e2-72ac-59c0-be86-55accb3c4bc4" 00:04:40.648 ], 00:04:40.648 "product_name": "passthru", 00:04:40.648 "block_size": 512, 00:04:40.648 "num_blocks": 16384, 00:04:40.648 "uuid": "322824e2-72ac-59c0-be86-55accb3c4bc4", 00:04:40.648 "assigned_rate_limits": { 00:04:40.648 
"rw_ios_per_sec": 0, 00:04:40.648 "rw_mbytes_per_sec": 0, 00:04:40.648 "r_mbytes_per_sec": 0, 00:04:40.648 "w_mbytes_per_sec": 0 00:04:40.648 }, 00:04:40.648 "claimed": false, 00:04:40.648 "zoned": false, 00:04:40.648 "supported_io_types": { 00:04:40.648 "read": true, 00:04:40.648 "write": true, 00:04:40.648 "unmap": true, 00:04:40.648 "flush": true, 00:04:40.648 "reset": true, 00:04:40.648 "nvme_admin": false, 00:04:40.648 "nvme_io": false, 00:04:40.648 "nvme_io_md": false, 00:04:40.648 "write_zeroes": true, 00:04:40.648 "zcopy": true, 00:04:40.648 "get_zone_info": false, 00:04:40.648 "zone_management": false, 00:04:40.648 "zone_append": false, 00:04:40.648 "compare": false, 00:04:40.648 "compare_and_write": false, 00:04:40.648 "abort": true, 00:04:40.648 "seek_hole": false, 00:04:40.648 "seek_data": false, 00:04:40.648 "copy": true, 00:04:40.648 "nvme_iov_md": false 00:04:40.648 }, 00:04:40.648 "memory_domains": [ 00:04:40.648 { 00:04:40.648 "dma_device_id": "system", 00:04:40.649 "dma_device_type": 1 00:04:40.649 }, 00:04:40.649 { 00:04:40.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.649 "dma_device_type": 2 00:04:40.649 } 00:04:40.649 ], 00:04:40.649 "driver_specific": { 00:04:40.649 "passthru": { 00:04:40.649 "name": "Passthru0", 00:04:40.649 "base_bdev_name": "Malloc2" 00:04:40.649 } 00:04:40.649 } 00:04:40.649 } 00:04:40.649 ]' 00:04:40.649 12:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:40.908 12:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:40.908 12:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:40.908 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.908 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.908 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.908 12:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc2 00:04:40.908 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.908 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.908 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.908 12:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:40.908 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.908 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.908 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.908 12:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:40.908 12:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:40.908 12:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.908 00:04:40.908 real 0m0.285s 00:04:40.908 user 0m0.180s 00:04:40.908 sys 0m0.046s 00:04:40.908 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.908 12:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.908 ************************************ 00:04:40.908 END TEST rpc_daemon_integrity 00:04:40.908 ************************************ 00:04:40.908 12:34:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:40.908 12:34:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:40.908 12:34:32 rpc -- rpc/rpc.sh@84 -- # killprocess 3729932 00:04:40.908 12:34:32 rpc -- common/autotest_common.sh@948 -- # '[' -z 3729932 ']' 00:04:40.908 12:34:32 rpc -- common/autotest_common.sh@952 -- # kill -0 3729932 00:04:40.908 12:34:32 rpc -- common/autotest_common.sh@953 -- # uname 00:04:40.908 12:34:32 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:40.908 12:34:32 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 3729932 00:04:40.908 12:34:32 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:40.908 12:34:32 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:40.908 12:34:32 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3729932' 00:04:40.908 killing process with pid 3729932 00:04:40.908 12:34:32 rpc -- common/autotest_common.sh@967 -- # kill 3729932 00:04:40.908 12:34:32 rpc -- common/autotest_common.sh@972 -- # wait 3729932 00:04:41.165 00:04:41.165 real 0m2.688s 00:04:41.165 user 0m3.563s 00:04:41.165 sys 0m0.721s 00:04:41.165 12:34:33 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.165 12:34:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.165 ************************************ 00:04:41.165 END TEST rpc 00:04:41.165 ************************************ 00:04:41.423 12:34:33 -- common/autotest_common.sh@1142 -- # return 0 00:04:41.423 12:34:33 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:41.423 12:34:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.423 12:34:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.423 12:34:33 -- common/autotest_common.sh@10 -- # set +x 00:04:41.423 ************************************ 00:04:41.423 START TEST skip_rpc 00:04:41.423 ************************************ 00:04:41.423 12:34:33 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:41.423 * Looking for test storage... 
00:04:41.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:41.423 12:34:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:41.423 12:34:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:41.423 12:34:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:41.423 12:34:33 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.423 12:34:33 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.423 12:34:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.423 ************************************ 00:04:41.423 START TEST skip_rpc 00:04:41.423 ************************************ 00:04:41.423 12:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:41.423 12:34:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3730632 00:04:41.423 12:34:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.423 12:34:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:41.423 12:34:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:41.423 [2024-07-15 12:34:33.320208] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:04:41.423 [2024-07-15 12:34:33.320266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3730632 ] 00:04:41.423 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.682 [2024-07-15 12:34:33.396113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.682 [2024-07-15 12:34:33.485231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es 
== 0 )) 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3730632 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3730632 ']' 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3730632 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3730632 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3730632' 00:04:46.950 killing process with pid 3730632 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3730632 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3730632 00:04:46.950 00:04:46.950 real 0m5.398s 00:04:46.950 user 0m5.128s 00:04:46.950 sys 0m0.291s 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.950 12:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.950 ************************************ 00:04:46.950 END TEST skip_rpc 00:04:46.950 ************************************ 00:04:46.950 12:34:38 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:46.950 12:34:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:46.950 12:34:38 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.950 12:34:38 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.950 
12:34:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.950 ************************************ 00:04:46.950 START TEST skip_rpc_with_json 00:04:46.950 ************************************ 00:04:46.950 12:34:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:46.950 12:34:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:46.950 12:34:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3731701 00:04:46.950 12:34:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.950 12:34:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.950 12:34:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3731701 00:04:46.950 12:34:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3731701 ']' 00:04:46.950 12:34:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.950 12:34:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.950 12:34:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.950 12:34:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.950 12:34:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.950 [2024-07-15 12:34:38.790486] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:04:46.950 [2024-07-15 12:34:38.790547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731701 ] 00:04:46.950 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.950 [2024-07-15 12:34:38.870740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.209 [2024-07-15 12:34:38.953334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.468 12:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.468 12:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:47.468 12:34:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:47.468 12:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.468 12:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.468 [2024-07-15 12:34:39.182483] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:47.468 request: 00:04:47.468 { 00:04:47.468 "trtype": "tcp", 00:04:47.468 "method": "nvmf_get_transports", 00:04:47.468 "req_id": 1 00:04:47.468 } 00:04:47.468 Got JSON-RPC error response 00:04:47.468 response: 00:04:47.468 { 00:04:47.468 "code": -19, 00:04:47.468 "message": "No such device" 00:04:47.468 } 00:04:47.468 12:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:47.468 12:34:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:47.468 12:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.468 12:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.468 [2024-07-15 12:34:39.194624] tcp.c: 
672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.468 12:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.468 12:34:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:47.468 12:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.468 12:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.468 12:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.468 12:34:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:47.468 { 00:04:47.468 "subsystems": [ 00:04:47.468 { 00:04:47.468 "subsystem": "vfio_user_target", 00:04:47.468 "config": null 00:04:47.468 }, 00:04:47.468 { 00:04:47.468 "subsystem": "keyring", 00:04:47.468 "config": [] 00:04:47.468 }, 00:04:47.468 { 00:04:47.468 "subsystem": "iobuf", 00:04:47.468 "config": [ 00:04:47.468 { 00:04:47.468 "method": "iobuf_set_options", 00:04:47.468 "params": { 00:04:47.468 "small_pool_count": 8192, 00:04:47.468 "large_pool_count": 1024, 00:04:47.468 "small_bufsize": 8192, 00:04:47.468 "large_bufsize": 135168 00:04:47.468 } 00:04:47.468 } 00:04:47.469 ] 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "subsystem": "sock", 00:04:47.469 "config": [ 00:04:47.469 { 00:04:47.469 "method": "sock_set_default_impl", 00:04:47.469 "params": { 00:04:47.469 "impl_name": "posix" 00:04:47.469 } 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "method": "sock_impl_set_options", 00:04:47.469 "params": { 00:04:47.469 "impl_name": "ssl", 00:04:47.469 "recv_buf_size": 4096, 00:04:47.469 "send_buf_size": 4096, 00:04:47.469 "enable_recv_pipe": true, 00:04:47.469 "enable_quickack": false, 00:04:47.469 "enable_placement_id": 0, 00:04:47.469 "enable_zerocopy_send_server": true, 00:04:47.469 "enable_zerocopy_send_client": false, 00:04:47.469 "zerocopy_threshold": 0, 
00:04:47.469 "tls_version": 0, 00:04:47.469 "enable_ktls": false 00:04:47.469 } 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "method": "sock_impl_set_options", 00:04:47.469 "params": { 00:04:47.469 "impl_name": "posix", 00:04:47.469 "recv_buf_size": 2097152, 00:04:47.469 "send_buf_size": 2097152, 00:04:47.469 "enable_recv_pipe": true, 00:04:47.469 "enable_quickack": false, 00:04:47.469 "enable_placement_id": 0, 00:04:47.469 "enable_zerocopy_send_server": true, 00:04:47.469 "enable_zerocopy_send_client": false, 00:04:47.469 "zerocopy_threshold": 0, 00:04:47.469 "tls_version": 0, 00:04:47.469 "enable_ktls": false 00:04:47.469 } 00:04:47.469 } 00:04:47.469 ] 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "subsystem": "vmd", 00:04:47.469 "config": [] 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "subsystem": "accel", 00:04:47.469 "config": [ 00:04:47.469 { 00:04:47.469 "method": "accel_set_options", 00:04:47.469 "params": { 00:04:47.469 "small_cache_size": 128, 00:04:47.469 "large_cache_size": 16, 00:04:47.469 "task_count": 2048, 00:04:47.469 "sequence_count": 2048, 00:04:47.469 "buf_count": 2048 00:04:47.469 } 00:04:47.469 } 00:04:47.469 ] 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "subsystem": "bdev", 00:04:47.469 "config": [ 00:04:47.469 { 00:04:47.469 "method": "bdev_set_options", 00:04:47.469 "params": { 00:04:47.469 "bdev_io_pool_size": 65535, 00:04:47.469 "bdev_io_cache_size": 256, 00:04:47.469 "bdev_auto_examine": true, 00:04:47.469 "iobuf_small_cache_size": 128, 00:04:47.469 "iobuf_large_cache_size": 16 00:04:47.469 } 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "method": "bdev_raid_set_options", 00:04:47.469 "params": { 00:04:47.469 "process_window_size_kb": 1024 00:04:47.469 } 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "method": "bdev_iscsi_set_options", 00:04:47.469 "params": { 00:04:47.469 "timeout_sec": 30 00:04:47.469 } 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "method": "bdev_nvme_set_options", 00:04:47.469 "params": { 00:04:47.469 "action_on_timeout": 
"none", 00:04:47.469 "timeout_us": 0, 00:04:47.469 "timeout_admin_us": 0, 00:04:47.469 "keep_alive_timeout_ms": 10000, 00:04:47.469 "arbitration_burst": 0, 00:04:47.469 "low_priority_weight": 0, 00:04:47.469 "medium_priority_weight": 0, 00:04:47.469 "high_priority_weight": 0, 00:04:47.469 "nvme_adminq_poll_period_us": 10000, 00:04:47.469 "nvme_ioq_poll_period_us": 0, 00:04:47.469 "io_queue_requests": 0, 00:04:47.469 "delay_cmd_submit": true, 00:04:47.469 "transport_retry_count": 4, 00:04:47.469 "bdev_retry_count": 3, 00:04:47.469 "transport_ack_timeout": 0, 00:04:47.469 "ctrlr_loss_timeout_sec": 0, 00:04:47.469 "reconnect_delay_sec": 0, 00:04:47.469 "fast_io_fail_timeout_sec": 0, 00:04:47.469 "disable_auto_failback": false, 00:04:47.469 "generate_uuids": false, 00:04:47.469 "transport_tos": 0, 00:04:47.469 "nvme_error_stat": false, 00:04:47.469 "rdma_srq_size": 0, 00:04:47.469 "io_path_stat": false, 00:04:47.469 "allow_accel_sequence": false, 00:04:47.469 "rdma_max_cq_size": 0, 00:04:47.469 "rdma_cm_event_timeout_ms": 0, 00:04:47.469 "dhchap_digests": [ 00:04:47.469 "sha256", 00:04:47.469 "sha384", 00:04:47.469 "sha512" 00:04:47.469 ], 00:04:47.469 "dhchap_dhgroups": [ 00:04:47.469 "null", 00:04:47.469 "ffdhe2048", 00:04:47.469 "ffdhe3072", 00:04:47.469 "ffdhe4096", 00:04:47.469 "ffdhe6144", 00:04:47.469 "ffdhe8192" 00:04:47.469 ] 00:04:47.469 } 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "method": "bdev_nvme_set_hotplug", 00:04:47.469 "params": { 00:04:47.469 "period_us": 100000, 00:04:47.469 "enable": false 00:04:47.469 } 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "method": "bdev_wait_for_examine" 00:04:47.469 } 00:04:47.469 ] 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "subsystem": "scsi", 00:04:47.469 "config": null 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "subsystem": "scheduler", 00:04:47.469 "config": [ 00:04:47.469 { 00:04:47.469 "method": "framework_set_scheduler", 00:04:47.469 "params": { 00:04:47.469 "name": "static" 00:04:47.469 } 00:04:47.469 } 
00:04:47.469 ] 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "subsystem": "vhost_scsi", 00:04:47.469 "config": [] 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "subsystem": "vhost_blk", 00:04:47.469 "config": [] 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "subsystem": "ublk", 00:04:47.469 "config": [] 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "subsystem": "nbd", 00:04:47.469 "config": [] 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "subsystem": "nvmf", 00:04:47.469 "config": [ 00:04:47.469 { 00:04:47.469 "method": "nvmf_set_config", 00:04:47.469 "params": { 00:04:47.469 "discovery_filter": "match_any", 00:04:47.469 "admin_cmd_passthru": { 00:04:47.469 "identify_ctrlr": false 00:04:47.469 } 00:04:47.469 } 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "method": "nvmf_set_max_subsystems", 00:04:47.469 "params": { 00:04:47.469 "max_subsystems": 1024 00:04:47.469 } 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "method": "nvmf_set_crdt", 00:04:47.469 "params": { 00:04:47.469 "crdt1": 0, 00:04:47.469 "crdt2": 0, 00:04:47.469 "crdt3": 0 00:04:47.469 } 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "method": "nvmf_create_transport", 00:04:47.469 "params": { 00:04:47.469 "trtype": "TCP", 00:04:47.469 "max_queue_depth": 128, 00:04:47.469 "max_io_qpairs_per_ctrlr": 127, 00:04:47.469 "in_capsule_data_size": 4096, 00:04:47.469 "max_io_size": 131072, 00:04:47.469 "io_unit_size": 131072, 00:04:47.469 "max_aq_depth": 128, 00:04:47.469 "num_shared_buffers": 511, 00:04:47.469 "buf_cache_size": 4294967295, 00:04:47.469 "dif_insert_or_strip": false, 00:04:47.469 "zcopy": false, 00:04:47.469 "c2h_success": true, 00:04:47.469 "sock_priority": 0, 00:04:47.469 "abort_timeout_sec": 1, 00:04:47.469 "ack_timeout": 0, 00:04:47.469 "data_wr_pool_size": 0 00:04:47.469 } 00:04:47.469 } 00:04:47.469 ] 00:04:47.469 }, 00:04:47.469 { 00:04:47.469 "subsystem": "iscsi", 00:04:47.469 "config": [ 00:04:47.469 { 00:04:47.469 "method": "iscsi_set_options", 00:04:47.469 "params": { 00:04:47.469 "node_base": 
"iqn.2016-06.io.spdk", 00:04:47.469 "max_sessions": 128, 00:04:47.469 "max_connections_per_session": 2, 00:04:47.469 "max_queue_depth": 64, 00:04:47.469 "default_time2wait": 2, 00:04:47.469 "default_time2retain": 20, 00:04:47.469 "first_burst_length": 8192, 00:04:47.469 "immediate_data": true, 00:04:47.469 "allow_duplicated_isid": false, 00:04:47.469 "error_recovery_level": 0, 00:04:47.469 "nop_timeout": 60, 00:04:47.469 "nop_in_interval": 30, 00:04:47.469 "disable_chap": false, 00:04:47.469 "require_chap": false, 00:04:47.469 "mutual_chap": false, 00:04:47.469 "chap_group": 0, 00:04:47.469 "max_large_datain_per_connection": 64, 00:04:47.469 "max_r2t_per_connection": 4, 00:04:47.469 "pdu_pool_size": 36864, 00:04:47.469 "immediate_data_pool_size": 16384, 00:04:47.469 "data_out_pool_size": 2048 00:04:47.469 } 00:04:47.469 } 00:04:47.469 ] 00:04:47.469 } 00:04:47.469 ] 00:04:47.469 } 00:04:47.469 12:34:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:47.469 12:34:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3731701 00:04:47.469 12:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3731701 ']' 00:04:47.469 12:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3731701 00:04:47.469 12:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:47.469 12:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:47.469 12:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3731701 00:04:47.469 12:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:47.469 12:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:47.469 12:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3731701' 00:04:47.469 
killing process with pid 3731701 00:04:47.469 12:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3731701 00:04:47.469 12:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3731701 00:04:48.036 12:34:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:48.036 12:34:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3731828 00:04:48.036 12:34:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:53.302 12:34:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3731828 00:04:53.302 12:34:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3731828 ']' 00:04:53.302 12:34:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3731828 00:04:53.302 12:34:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:53.302 12:34:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:53.302 12:34:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3731828 00:04:53.302 12:34:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:53.302 12:34:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:53.302 12:34:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3731828' 00:04:53.302 killing process with pid 3731828 00:04:53.302 12:34:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3731828 00:04:53.302 12:34:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3731828 00:04:53.302 12:34:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport 
Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:53.302 12:34:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:53.302 00:04:53.302 real 0m6.405s 00:04:53.302 user 0m6.089s 00:04:53.302 sys 0m0.642s 00:04:53.302 12:34:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.302 12:34:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.302 ************************************ 00:04:53.302 END TEST skip_rpc_with_json 00:04:53.302 ************************************ 00:04:53.302 12:34:45 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:53.302 12:34:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:53.302 12:34:45 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.302 12:34:45 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.302 12:34:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.302 ************************************ 00:04:53.302 START TEST skip_rpc_with_delay 00:04:53.302 ************************************ 00:04:53.302 12:34:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:53.302 12:34:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.302 12:34:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:53.302 12:34:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.302 12:34:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.302 12:34:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:53.302 12:34:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.302 12:34:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:53.302 12:34:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.302 12:34:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:53.302 12:34:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.302 12:34:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:53.302 12:34:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.561 [2024-07-15 12:34:45.262772] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
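The skip_rpc_with_delay block above wraps the spdk_tgt invocation in NOT, a helper that passes only when the wrapped command fails — here, because `--wait-for-rpc` is rejected together with `--no-rpc-server`. A standalone sketch of that expected-failure idiom; the lowercase name and body are illustrative, not the autotest_common.sh implementation:

```shell
# Expected-failure assertion: succeed only if the wrapped command fails.
not() {
    if "$@"; then
        echo "expected failure, but succeeded: $*" >&2
        return 1
    fi
    return 0
}

not false && echo "ok: command failed as expected"      # prints the ok line
not true || echo "caught: command unexpectedly succeeded"
```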
00:04:53.561 [2024-07-15 12:34:45.262854] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:53.561 12:34:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:53.561 12:34:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:53.561 12:34:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:53.561 12:34:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:53.561 00:04:53.561 real 0m0.076s 00:04:53.561 user 0m0.050s 00:04:53.561 sys 0m0.025s 00:04:53.561 12:34:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.561 12:34:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:53.561 ************************************ 00:04:53.561 END TEST skip_rpc_with_delay 00:04:53.561 ************************************ 00:04:53.561 12:34:45 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:53.561 12:34:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:53.561 12:34:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:53.561 12:34:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:53.561 12:34:45 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.561 12:34:45 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.561 12:34:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.561 ************************************ 00:04:53.561 START TEST exit_on_failed_rpc_init 00:04:53.561 ************************************ 00:04:53.561 12:34:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:53.561 12:34:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3732849 00:04:53.561 12:34:45 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@63 -- # waitforlisten 3732849 00:04:53.561 12:34:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.561 12:34:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3732849 ']' 00:04:53.561 12:34:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.561 12:34:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.561 12:34:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.561 12:34:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.561 12:34:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.561 [2024-07-15 12:34:45.408185] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
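Before exec'ing spdk_tgt through the NOT wrapper, the harness probes the argument with `type -t` and `type -P` — the repeated `case "$(type -t "$arg")"` lines in the surrounding trace — so functions, builtins, and on-disk executables are all accepted. A compact reconstruction of that probe (illustrative, not the exact helper):

```shell
# Accept the argument if it is callable: a shell function, a builtin,
# or an executable file resolvable via PATH lookup (type -P).
valid_exec_arg() {
    local arg=$1
    case $(type -t "$arg") in
        function|builtin) return 0 ;;
        file) [[ -x $(type -P "$arg") ]] ;;
        *) return 1 ;;
    esac
}

valid_exec_arg ls && echo "ls is runnable"
```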
00:04:53.561 [2024-07-15 12:34:45.408238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3732849 ] 00:04:53.561 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.561 [2024-07-15 12:34:45.488515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.820 [2024-07-15 12:34:45.586815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.080 12:34:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.080 12:34:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:54.080 12:34:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.080 12:34:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.080 12:34:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:54.080 12:34:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.080 12:34:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.080 12:34:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.080 12:34:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.080 12:34:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.080 12:34:45 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.080 12:34:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.080 12:34:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.080 12:34:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:54.080 12:34:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.080 [2024-07-15 12:34:45.920459] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:04:54.080 [2024-07-15 12:34:45.920519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3733096 ] 00:04:54.080 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.080 [2024-07-15 12:34:46.000237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.339 [2024-07-15 12:34:46.101190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.339 [2024-07-15 12:34:46.101283] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
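When the second instance cannot claim /var/tmp/spdk.sock, it exits non-zero and the harness folds the status before asserting: the trace shows es=234 reduced to 106 (the 128 killed-by-signal offset stripped) and then collapsed to 1. A sketch of that folding; the real helper in autotest_common.sh has a larger case table, so this is a reconstruction of the idiom only:

```shell
# Fold a raw exit status into the small set the tests assert against.
classify_es() {
    local es=$1
    if (( es > 128 )); then
        es=$(( es - 128 ))    # strip the killed-by-signal offset (234 -> 106)
    fi
    case $es in
        0) echo 0 ;;          # success
        *) echo 1 ;;          # any failure collapses to a plain 1
    esac
}

classify_es 234   # prints 1
classify_es 0     # prints 0
```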
00:04:54.339 [2024-07-15 12:34:46.101300] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:54.339 [2024-07-15 12:34:46.101312] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:54.339 12:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:54.339 12:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:54.339 12:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:54.339 12:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:54.339 12:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:54.339 12:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:54.339 12:34:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:54.339 12:34:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3732849 00:04:54.339 12:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3732849 ']' 00:04:54.339 12:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3732849 00:04:54.339 12:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:54.339 12:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:54.339 12:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3732849 00:04:54.339 12:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:54.339 12:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:54.339 12:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3732849' 
00:04:54.339 killing process with pid 3732849 00:04:54.339 12:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3732849 00:04:54.339 12:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3732849 00:04:54.906 00:04:54.907 real 0m1.238s 00:04:54.907 user 0m1.671s 00:04:54.907 sys 0m0.436s 00:04:54.907 12:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.907 12:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:54.907 ************************************ 00:04:54.907 END TEST exit_on_failed_rpc_init 00:04:54.907 ************************************ 00:04:54.907 12:34:46 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:54.907 12:34:46 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:54.907 00:04:54.907 real 0m13.487s 00:04:54.907 user 0m13.075s 00:04:54.907 sys 0m1.654s 00:04:54.907 12:34:46 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.907 12:34:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.907 ************************************ 00:04:54.907 END TEST skip_rpc 00:04:54.907 ************************************ 00:04:54.907 12:34:46 -- common/autotest_common.sh@1142 -- # return 0 00:04:54.907 12:34:46 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:54.907 12:34:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.907 12:34:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.907 12:34:46 -- common/autotest_common.sh@10 -- # set +x 00:04:54.907 ************************************ 00:04:54.907 START TEST rpc_client 00:04:54.907 ************************************ 00:04:54.907 12:34:46 rpc_client -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:54.907 * Looking for test storage... 00:04:54.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:54.907 12:34:46 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:54.907 OK 00:04:54.907 12:34:46 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:54.907 00:04:54.907 real 0m0.114s 00:04:54.907 user 0m0.055s 00:04:54.907 sys 0m0.067s 00:04:54.907 12:34:46 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.907 12:34:46 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:54.907 ************************************ 00:04:54.907 END TEST rpc_client 00:04:54.907 ************************************ 00:04:55.166 12:34:46 -- common/autotest_common.sh@1142 -- # return 0 00:04:55.166 12:34:46 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:55.166 12:34:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.166 12:34:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.166 12:34:46 -- common/autotest_common.sh@10 -- # set +x 00:04:55.166 ************************************ 00:04:55.166 START TEST json_config 00:04:55.166 ************************************ 00:04:55.166 12:34:46 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:55.166 12:34:46 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.166 
12:34:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:55.166 12:34:46 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.166 12:34:46 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.166 12:34:46 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.166 12:34:46 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.166 12:34:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.166 12:34:46 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.166 12:34:46 json_config -- paths/export.sh@5 -- # export PATH 00:04:55.166 12:34:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@47 -- # : 0 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:55.166 
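The PATH printed by export.sh above carries the same /opt/golangci, /opt/protoc, and /opt/go prefixes several times because each sourcing prepends them again. That is harmless, but an order-preserving de-duplication could look like this (a standalone sketch, not part of the SPDK scripts):

```shell
# Remove repeated PATH entries while keeping first-seen order.
dedup_path() {
    local IFS=':' entry out=''
    declare -A seen
    for entry in $1; do
        if [[ -n $entry && -z ${seen[$entry]:-} ]]; then
            seen[$entry]=1
            out+=${out:+:}$entry
        fi
    done
    echo "$out"
}

dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/usr/bin"   # prints /opt/go/bin:/usr/bin
```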
12:34:46 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:55.166 12:34:46 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:55.166 12:34:46 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:55.166 12:34:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:55.166 12:34:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:55.166 12:34:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:55.166 12:34:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:55.166 12:34:46 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:55.166 12:34:46 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:55.166 12:34:46 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:55.166 12:34:46 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:55.166 12:34:46 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:55.166 12:34:46 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:55.166 12:34:46 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:55.166 12:34:46 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:55.166 12:34:46 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:55.166 12:34:46 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:55.166 12:34:46 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:55.166 INFO: JSON configuration test init 00:04:55.166 12:34:46 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:55.166 12:34:46 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:55.166 12:34:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:55.166 12:34:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.166 12:34:46 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:55.166 12:34:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:55.166 12:34:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.166 12:34:46 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:55.166 12:34:46 json_config -- json_config/common.sh@9 -- # local app=target 00:04:55.166 12:34:46 json_config -- json_config/common.sh@10 -- # shift 00:04:55.166 12:34:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:55.166 12:34:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:55.166 12:34:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:55.166 12:34:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.166 12:34:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:04:55.166 12:34:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3733340 00:04:55.166 12:34:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:55.166 Waiting for target to run... 00:04:55.166 12:34:46 json_config -- json_config/common.sh@25 -- # waitforlisten 3733340 /var/tmp/spdk_tgt.sock 00:04:55.166 12:34:46 json_config -- common/autotest_common.sh@829 -- # '[' -z 3733340 ']' 00:04:55.166 12:34:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:55.167 12:34:46 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:55.167 12:34:46 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.167 12:34:46 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:55.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:55.167 12:34:46 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.167 12:34:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.167 [2024-07-15 12:34:47.051092] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:04:55.167 [2024-07-15 12:34:47.051158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3733340 ] 00:04:55.167 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.426 [2024-07-15 12:34:47.355467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.685 [2024-07-15 12:34:47.435135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.253 12:34:47 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.253 12:34:47 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:56.253 12:34:47 json_config -- json_config/common.sh@26 -- # echo '' 00:04:56.253 00:04:56.253 12:34:47 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:56.253 12:34:47 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:56.253 12:34:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:56.253 12:34:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.253 12:34:47 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:56.253 12:34:48 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:56.253 12:34:48 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:56.253 12:34:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.253 12:34:48 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:56.253 12:34:48 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:56.253 12:34:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:59.542 
12:34:51 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:59.542 12:34:51 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:59.542 12:34:51 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:59.542 12:34:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.542 12:34:51 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:59.542 12:34:51 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:59.542 12:34:51 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:59.542 12:34:51 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:59.542 12:34:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:59.542 12:34:51 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:59.542 12:34:51 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:59.542 12:34:51 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:59.542 12:34:51 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:59.542 12:34:51 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:59.542 12:34:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:59.542 12:34:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.800 12:34:51 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:59.801 12:34:51 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:59.801 12:34:51 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:59.801 12:34:51 json_config -- json_config/json_config.sh@286 
-- # [[ 0 -eq 1 ]] 00:04:59.801 12:34:51 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:59.801 12:34:51 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:59.801 12:34:51 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:59.801 12:34:51 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:59.801 12:34:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.801 12:34:51 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:59.801 12:34:51 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:59.801 12:34:51 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:59.801 12:34:51 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:59.801 12:34:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:59.801 MallocForNvmf0 00:05:00.060 12:34:51 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:00.060 12:34:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:00.318 MallocForNvmf1 00:05:00.318 12:34:52 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:00.318 12:34:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:00.886 [2024-07-15 12:34:52.704692] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:00.886 12:34:52 json_config -- 
json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:00.886 12:34:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:01.146 12:34:52 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:01.146 12:34:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:01.453 12:34:53 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:01.453 12:34:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:01.765 12:34:53 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:01.765 12:34:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:02.024 [2024-07-15 12:34:53.711968] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:02.024 12:34:53 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:02.024 12:34:53 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:02.024 12:34:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.024 12:34:53 json_config -- json_config/json_config.sh@293 -- # timing_exit 
json_config_setup_target 00:05:02.024 12:34:53 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:02.024 12:34:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.024 12:34:53 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:02.024 12:34:53 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:02.024 12:34:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:02.282 MallocBdevForConfigChangeCheck 00:05:02.282 12:34:54 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:02.282 12:34:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:02.282 12:34:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.282 12:34:54 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:02.282 12:34:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:02.541 12:34:54 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:02.541 INFO: shutting down applications... 
00:05:02.541 12:34:54 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:02.541 12:34:54 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:02.541 12:34:54 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:02.541 12:34:54 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:04.444 Calling clear_iscsi_subsystem 00:05:04.444 Calling clear_nvmf_subsystem 00:05:04.444 Calling clear_nbd_subsystem 00:05:04.444 Calling clear_ublk_subsystem 00:05:04.444 Calling clear_vhost_blk_subsystem 00:05:04.444 Calling clear_vhost_scsi_subsystem 00:05:04.444 Calling clear_bdev_subsystem 00:05:04.444 12:34:56 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:04.444 12:34:56 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:04.444 12:34:56 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:04.444 12:34:56 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:04.444 12:34:56 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:04.444 12:34:56 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:04.702 12:34:56 json_config -- json_config/json_config.sh@345 -- # break 00:05:04.702 12:34:56 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:04.702 12:34:56 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:04.702 12:34:56 json_config -- 
json_config/common.sh@31 -- # local app=target 00:05:04.702 12:34:56 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:04.702 12:34:56 json_config -- json_config/common.sh@35 -- # [[ -n 3733340 ]] 00:05:04.702 12:34:56 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3733340 00:05:04.702 12:34:56 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:04.702 12:34:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.702 12:34:56 json_config -- json_config/common.sh@41 -- # kill -0 3733340 00:05:04.702 12:34:56 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.269 12:34:57 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.269 12:34:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.269 12:34:57 json_config -- json_config/common.sh@41 -- # kill -0 3733340 00:05:05.269 12:34:57 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:05.269 12:34:57 json_config -- json_config/common.sh@43 -- # break 00:05:05.269 12:34:57 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:05.269 12:34:57 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:05.269 SPDK target shutdown done 00:05:05.269 12:34:57 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:05.269 INFO: relaunching applications... 
00:05:05.269 12:34:57 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.269 12:34:57 json_config -- json_config/common.sh@9 -- # local app=target 00:05:05.269 12:34:57 json_config -- json_config/common.sh@10 -- # shift 00:05:05.269 12:34:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:05.269 12:34:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:05.269 12:34:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:05.269 12:34:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.269 12:34:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.269 12:34:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3735289 00:05:05.269 12:34:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:05.269 Waiting for target to run... 00:05:05.269 12:34:57 json_config -- json_config/common.sh@25 -- # waitforlisten 3735289 /var/tmp/spdk_tgt.sock 00:05:05.269 12:34:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.269 12:34:57 json_config -- common/autotest_common.sh@829 -- # '[' -z 3735289 ']' 00:05:05.269 12:34:57 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:05.269 12:34:57 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.269 12:34:57 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:05.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:05.269 12:34:57 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.269 12:34:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.269 [2024-07-15 12:34:57.106261] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:05:05.269 [2024-07-15 12:34:57.106331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3735289 ] 00:05:05.269 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.528 [2024-07-15 12:34:57.410961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.787 [2024-07-15 12:34:57.488487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.068 [2024-07-15 12:35:00.536636] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.069 [2024-07-15 12:35:00.568981] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:09.069 12:35:00 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.069 12:35:00 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:09.069 12:35:00 json_config -- json_config/common.sh@26 -- # echo '' 00:05:09.069 00:05:09.069 12:35:00 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:09.069 12:35:00 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:09.069 INFO: Checking if target configuration is the same... 
00:05:09.069 12:35:00 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.069 12:35:00 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:09.069 12:35:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.069 + '[' 2 -ne 2 ']' 00:05:09.069 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:09.069 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:09.069 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:09.069 +++ basename /dev/fd/62 00:05:09.069 ++ mktemp /tmp/62.XXX 00:05:09.069 + tmp_file_1=/tmp/62.hyN 00:05:09.069 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.069 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:09.069 + tmp_file_2=/tmp/spdk_tgt_config.json.OdV 00:05:09.069 + ret=0 00:05:09.069 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:09.069 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:09.327 + diff -u /tmp/62.hyN /tmp/spdk_tgt_config.json.OdV 00:05:09.327 + echo 'INFO: JSON config files are the same' 00:05:09.327 INFO: JSON config files are the same 00:05:09.327 + rm /tmp/62.hyN /tmp/spdk_tgt_config.json.OdV 00:05:09.327 + exit 0 00:05:09.327 12:35:01 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:09.327 12:35:01 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:09.327 INFO: changing configuration and checking if this can be detected... 
00:05:09.327 12:35:01 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:09.328 12:35:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:09.586 12:35:01 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.586 12:35:01 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:09.586 12:35:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.586 + '[' 2 -ne 2 ']' 00:05:09.586 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:09.586 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:09.586 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:09.586 +++ basename /dev/fd/62 00:05:09.586 ++ mktemp /tmp/62.XXX 00:05:09.586 + tmp_file_1=/tmp/62.sse 00:05:09.586 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.586 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:09.586 + tmp_file_2=/tmp/spdk_tgt_config.json.xj2 00:05:09.586 + ret=0 00:05:09.586 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:09.845 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:09.845 + diff -u /tmp/62.sse /tmp/spdk_tgt_config.json.xj2 00:05:09.845 + ret=1 00:05:09.845 + echo '=== Start of file: /tmp/62.sse ===' 00:05:09.845 + cat /tmp/62.sse 00:05:09.845 + echo '=== End of file: /tmp/62.sse ===' 00:05:09.845 + echo '' 00:05:09.845 + echo '=== Start of file: /tmp/spdk_tgt_config.json.xj2 ===' 00:05:09.845 + cat /tmp/spdk_tgt_config.json.xj2 00:05:09.845 + echo '=== End of file: /tmp/spdk_tgt_config.json.xj2 ===' 00:05:09.845 + echo '' 00:05:09.845 + rm /tmp/62.sse /tmp/spdk_tgt_config.json.xj2 00:05:09.845 + exit 1 00:05:09.845 12:35:01 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:09.845 INFO: configuration change detected. 
00:05:09.845 12:35:01 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:09.845 12:35:01 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:09.845 12:35:01 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:09.845 12:35:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.845 12:35:01 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:09.845 12:35:01 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:09.845 12:35:01 json_config -- json_config/json_config.sh@317 -- # [[ -n 3735289 ]] 00:05:09.845 12:35:01 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:09.845 12:35:01 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:09.845 12:35:01 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:09.845 12:35:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.845 12:35:01 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:09.845 12:35:01 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:09.845 12:35:01 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:09.845 12:35:01 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:10.104 12:35:01 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:10.104 12:35:01 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:10.104 12:35:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:10.104 12:35:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.104 12:35:01 json_config -- json_config/json_config.sh@323 -- # killprocess 3735289 00:05:10.104 12:35:01 json_config -- common/autotest_common.sh@948 -- # '[' -z 3735289 ']' 00:05:10.104 12:35:01 json_config -- common/autotest_common.sh@952 -- # kill -0 
3735289 00:05:10.104 12:35:01 json_config -- common/autotest_common.sh@953 -- # uname 00:05:10.104 12:35:01 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.104 12:35:01 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3735289 00:05:10.104 12:35:01 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.104 12:35:01 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.104 12:35:01 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3735289' 00:05:10.104 killing process with pid 3735289 00:05:10.104 12:35:01 json_config -- common/autotest_common.sh@967 -- # kill 3735289 00:05:10.104 12:35:01 json_config -- common/autotest_common.sh@972 -- # wait 3735289 00:05:11.487 12:35:03 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.487 12:35:03 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:11.487 12:35:03 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:11.487 12:35:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.746 12:35:03 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:11.746 12:35:03 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:11.746 INFO: Success 00:05:11.746 00:05:11.746 real 0m16.568s 00:05:11.746 user 0m19.166s 00:05:11.746 sys 0m1.990s 00:05:11.746 12:35:03 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.746 12:35:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.746 ************************************ 00:05:11.746 END TEST json_config 00:05:11.746 ************************************ 00:05:11.746 12:35:03 -- common/autotest_common.sh@1142 -- # return 0 00:05:11.746 12:35:03 -- 
spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:11.746 12:35:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.746 12:35:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.746 12:35:03 -- common/autotest_common.sh@10 -- # set +x 00:05:11.746 ************************************ 00:05:11.746 START TEST json_config_extra_key 00:05:11.746 ************************************ 00:05:11.746 12:35:03 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:11.746 12:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:11.746 12:35:03 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:11.746 12:35:03 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:11.746 12:35:03 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:11.746 12:35:03 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:11.746 12:35:03 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:11.746 12:35:03 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:11.746 12:35:03 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:11.746 12:35:03 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:11.746 12:35:03 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:11.746 12:35:03 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:11.746 12:35:03 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:11.746 12:35:03 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:05:11.746 12:35:03 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:05:11.746 12:35:03 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:11.746 12:35:03 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:11.746 12:35:03 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:11.746 12:35:03 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:11.746 12:35:03 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:11.746 12:35:03 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:11.746 12:35:03 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:11.746 12:35:03 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:11.746 12:35:03 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.746 12:35:03 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.747 12:35:03 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.747 12:35:03 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:11.747 12:35:03 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.747 12:35:03 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:11.747 12:35:03 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:11.747 12:35:03 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:11.747 12:35:03 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:11.747 12:35:03 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:11.747 12:35:03 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:11.747 12:35:03 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:11.747 12:35:03 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:11.747 12:35:03 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:11.747 12:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:11.747 12:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:11.747 12:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:11.747 12:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:11.747 12:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:11.747 12:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:11.747 12:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:11.747 12:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:11.747 12:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:11.747 12:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:11.747 12:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:11.747 INFO: launching applications... 
00:05:11.747 12:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:11.747 12:35:03 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:11.747 12:35:03 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:11.747 12:35:03 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:11.747 12:35:03 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:11.747 12:35:03 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:11.747 12:35:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.747 12:35:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.747 12:35:03 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3736606 00:05:11.747 12:35:03 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:11.747 Waiting for target to run... 
00:05:11.747 12:35:03 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3736606 /var/tmp/spdk_tgt.sock 00:05:11.747 12:35:03 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3736606 ']' 00:05:11.747 12:35:03 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:11.747 12:35:03 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:11.747 12:35:03 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.747 12:35:03 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:11.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:11.747 12:35:03 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.747 12:35:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:11.747 [2024-07-15 12:35:03.673323] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:05:11.747 [2024-07-15 12:35:03.673369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3736606 ] 00:05:12.006 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.264 [2024-07-15 12:35:04.093991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.264 [2024-07-15 12:35:04.196496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.828 12:35:04 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.828 12:35:04 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:12.828 12:35:04 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:12.828 00:05:12.828 12:35:04 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:12.828 INFO: shutting down applications... 
00:05:12.828 12:35:04 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:12.828 12:35:04 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:12.828 12:35:04 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:12.828 12:35:04 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3736606 ]] 00:05:12.828 12:35:04 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3736606 00:05:12.828 12:35:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:12.828 12:35:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.828 12:35:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3736606 00:05:12.828 12:35:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:13.395 12:35:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:13.395 12:35:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.395 12:35:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3736606 00:05:13.395 12:35:05 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:13.395 12:35:05 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:13.395 12:35:05 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:13.395 12:35:05 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:13.395 SPDK target shutdown done 00:05:13.395 12:35:05 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:13.395 Success 00:05:13.395 00:05:13.395 real 0m1.600s 00:05:13.395 user 0m1.377s 00:05:13.395 sys 0m0.552s 00:05:13.395 12:35:05 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.395 12:35:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:13.395 
************************************ 00:05:13.395 END TEST json_config_extra_key 00:05:13.395 ************************************ 00:05:13.395 12:35:05 -- common/autotest_common.sh@1142 -- # return 0 00:05:13.395 12:35:05 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:13.395 12:35:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.395 12:35:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.395 12:35:05 -- common/autotest_common.sh@10 -- # set +x 00:05:13.395 ************************************ 00:05:13.395 START TEST alias_rpc 00:05:13.395 ************************************ 00:05:13.395 12:35:05 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:13.395 * Looking for test storage... 00:05:13.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:13.395 12:35:05 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:13.395 12:35:05 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3736923 00:05:13.395 12:35:05 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3736923 00:05:13.395 12:35:05 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.395 12:35:05 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3736923 ']' 00:05:13.395 12:35:05 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.395 12:35:05 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.395 12:35:05 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:13.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.395 12:35:05 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.395 12:35:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.655 [2024-07-15 12:35:05.347211] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:05:13.655 [2024-07-15 12:35:05.347281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3736923 ] 00:05:13.655 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.655 [2024-07-15 12:35:05.428133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.655 [2024-07-15 12:35:05.517561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.591 12:35:06 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.591 12:35:06 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:14.591 12:35:06 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:14.850 12:35:06 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3736923 00:05:14.850 12:35:06 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3736923 ']' 00:05:14.850 12:35:06 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3736923 00:05:14.850 12:35:06 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:14.850 12:35:06 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:14.850 12:35:06 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3736923 00:05:14.850 12:35:06 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:14.850 12:35:06 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:14.850 
12:35:06 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3736923' 00:05:14.850 killing process with pid 3736923 00:05:14.850 12:35:06 alias_rpc -- common/autotest_common.sh@967 -- # kill 3736923 00:05:14.850 12:35:06 alias_rpc -- common/autotest_common.sh@972 -- # wait 3736923 00:05:15.109 00:05:15.109 real 0m1.733s 00:05:15.109 user 0m2.017s 00:05:15.109 sys 0m0.462s 00:05:15.109 12:35:06 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.109 12:35:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.109 ************************************ 00:05:15.109 END TEST alias_rpc 00:05:15.109 ************************************ 00:05:15.109 12:35:06 -- common/autotest_common.sh@1142 -- # return 0 00:05:15.109 12:35:06 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:15.109 12:35:06 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:15.109 12:35:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.109 12:35:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.109 12:35:06 -- common/autotest_common.sh@10 -- # set +x 00:05:15.109 ************************************ 00:05:15.109 START TEST spdkcli_tcp 00:05:15.109 ************************************ 00:05:15.109 12:35:07 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:15.368 * Looking for test storage... 
00:05:15.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:15.368 12:35:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:15.368 12:35:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:15.368 12:35:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:15.368 12:35:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:15.368 12:35:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:15.368 12:35:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:15.368 12:35:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:15.368 12:35:07 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:15.368 12:35:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.368 12:35:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3737307 00:05:15.368 12:35:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3737307 00:05:15.368 12:35:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:15.368 12:35:07 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3737307 ']' 00:05:15.368 12:35:07 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.368 12:35:07 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.368 12:35:07 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:15.368 12:35:07 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.368 12:35:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.368 [2024-07-15 12:35:07.161780] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:05:15.368 [2024-07-15 12:35:07.161843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3737307 ] 00:05:15.368 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.368 [2024-07-15 12:35:07.238033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.627 [2024-07-15 12:35:07.332284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.627 [2024-07-15 12:35:07.332290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.627 12:35:07 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.627 12:35:07 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:15.627 12:35:07 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3737494 00:05:15.627 12:35:07 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:15.627 12:35:07 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:15.886 [ 00:05:15.886 "bdev_malloc_delete", 00:05:15.886 "bdev_malloc_create", 00:05:15.886 "bdev_null_resize", 00:05:15.886 "bdev_null_delete", 00:05:15.886 "bdev_null_create", 00:05:15.886 "bdev_nvme_cuse_unregister", 00:05:15.886 "bdev_nvme_cuse_register", 00:05:15.886 "bdev_opal_new_user", 00:05:15.886 "bdev_opal_set_lock_state", 00:05:15.886 "bdev_opal_delete", 00:05:15.886 "bdev_opal_get_info", 00:05:15.886 "bdev_opal_create", 00:05:15.886 "bdev_nvme_opal_revert", 00:05:15.886 
"bdev_nvme_opal_init", 00:05:15.886 "bdev_nvme_send_cmd", 00:05:15.886 "bdev_nvme_get_path_iostat", 00:05:15.886 "bdev_nvme_get_mdns_discovery_info", 00:05:15.886 "bdev_nvme_stop_mdns_discovery", 00:05:15.886 "bdev_nvme_start_mdns_discovery", 00:05:15.886 "bdev_nvme_set_multipath_policy", 00:05:15.886 "bdev_nvme_set_preferred_path", 00:05:15.886 "bdev_nvme_get_io_paths", 00:05:15.886 "bdev_nvme_remove_error_injection", 00:05:15.886 "bdev_nvme_add_error_injection", 00:05:15.886 "bdev_nvme_get_discovery_info", 00:05:15.886 "bdev_nvme_stop_discovery", 00:05:15.887 "bdev_nvme_start_discovery", 00:05:15.887 "bdev_nvme_get_controller_health_info", 00:05:15.887 "bdev_nvme_disable_controller", 00:05:15.887 "bdev_nvme_enable_controller", 00:05:15.887 "bdev_nvme_reset_controller", 00:05:15.887 "bdev_nvme_get_transport_statistics", 00:05:15.887 "bdev_nvme_apply_firmware", 00:05:15.887 "bdev_nvme_detach_controller", 00:05:15.887 "bdev_nvme_get_controllers", 00:05:15.887 "bdev_nvme_attach_controller", 00:05:15.887 "bdev_nvme_set_hotplug", 00:05:15.887 "bdev_nvme_set_options", 00:05:15.887 "bdev_passthru_delete", 00:05:15.887 "bdev_passthru_create", 00:05:15.887 "bdev_lvol_set_parent_bdev", 00:05:15.887 "bdev_lvol_set_parent", 00:05:15.887 "bdev_lvol_check_shallow_copy", 00:05:15.887 "bdev_lvol_start_shallow_copy", 00:05:15.887 "bdev_lvol_grow_lvstore", 00:05:15.887 "bdev_lvol_get_lvols", 00:05:15.887 "bdev_lvol_get_lvstores", 00:05:15.887 "bdev_lvol_delete", 00:05:15.887 "bdev_lvol_set_read_only", 00:05:15.887 "bdev_lvol_resize", 00:05:15.887 "bdev_lvol_decouple_parent", 00:05:15.887 "bdev_lvol_inflate", 00:05:15.887 "bdev_lvol_rename", 00:05:15.887 "bdev_lvol_clone_bdev", 00:05:15.887 "bdev_lvol_clone", 00:05:15.887 "bdev_lvol_snapshot", 00:05:15.887 "bdev_lvol_create", 00:05:15.887 "bdev_lvol_delete_lvstore", 00:05:15.887 "bdev_lvol_rename_lvstore", 00:05:15.887 "bdev_lvol_create_lvstore", 00:05:15.887 "bdev_raid_set_options", 00:05:15.887 "bdev_raid_remove_base_bdev", 
00:05:15.887 "bdev_raid_add_base_bdev", 00:05:15.887 "bdev_raid_delete", 00:05:15.887 "bdev_raid_create", 00:05:15.887 "bdev_raid_get_bdevs", 00:05:15.887 "bdev_error_inject_error", 00:05:15.887 "bdev_error_delete", 00:05:15.887 "bdev_error_create", 00:05:15.887 "bdev_split_delete", 00:05:15.887 "bdev_split_create", 00:05:15.887 "bdev_delay_delete", 00:05:15.887 "bdev_delay_create", 00:05:15.887 "bdev_delay_update_latency", 00:05:15.887 "bdev_zone_block_delete", 00:05:15.887 "bdev_zone_block_create", 00:05:15.887 "blobfs_create", 00:05:15.887 "blobfs_detect", 00:05:15.887 "blobfs_set_cache_size", 00:05:15.887 "bdev_aio_delete", 00:05:15.887 "bdev_aio_rescan", 00:05:15.887 "bdev_aio_create", 00:05:15.887 "bdev_ftl_set_property", 00:05:15.887 "bdev_ftl_get_properties", 00:05:15.887 "bdev_ftl_get_stats", 00:05:15.887 "bdev_ftl_unmap", 00:05:15.887 "bdev_ftl_unload", 00:05:15.887 "bdev_ftl_delete", 00:05:15.887 "bdev_ftl_load", 00:05:15.887 "bdev_ftl_create", 00:05:15.887 "bdev_virtio_attach_controller", 00:05:15.887 "bdev_virtio_scsi_get_devices", 00:05:15.887 "bdev_virtio_detach_controller", 00:05:15.887 "bdev_virtio_blk_set_hotplug", 00:05:15.887 "bdev_iscsi_delete", 00:05:15.887 "bdev_iscsi_create", 00:05:15.887 "bdev_iscsi_set_options", 00:05:15.887 "accel_error_inject_error", 00:05:15.887 "ioat_scan_accel_module", 00:05:15.887 "dsa_scan_accel_module", 00:05:15.887 "iaa_scan_accel_module", 00:05:15.887 "vfu_virtio_create_scsi_endpoint", 00:05:15.887 "vfu_virtio_scsi_remove_target", 00:05:15.887 "vfu_virtio_scsi_add_target", 00:05:15.887 "vfu_virtio_create_blk_endpoint", 00:05:15.887 "vfu_virtio_delete_endpoint", 00:05:15.887 "keyring_file_remove_key", 00:05:15.887 "keyring_file_add_key", 00:05:15.887 "keyring_linux_set_options", 00:05:15.887 "iscsi_get_histogram", 00:05:15.887 "iscsi_enable_histogram", 00:05:15.887 "iscsi_set_options", 00:05:15.887 "iscsi_get_auth_groups", 00:05:15.887 "iscsi_auth_group_remove_secret", 00:05:15.887 "iscsi_auth_group_add_secret", 
00:05:15.887 "iscsi_delete_auth_group", 00:05:15.887 "iscsi_create_auth_group", 00:05:15.887 "iscsi_set_discovery_auth", 00:05:15.887 "iscsi_get_options", 00:05:15.887 "iscsi_target_node_request_logout", 00:05:15.887 "iscsi_target_node_set_redirect", 00:05:15.887 "iscsi_target_node_set_auth", 00:05:15.887 "iscsi_target_node_add_lun", 00:05:15.887 "iscsi_get_stats", 00:05:15.887 "iscsi_get_connections", 00:05:15.887 "iscsi_portal_group_set_auth", 00:05:15.887 "iscsi_start_portal_group", 00:05:15.887 "iscsi_delete_portal_group", 00:05:15.887 "iscsi_create_portal_group", 00:05:15.887 "iscsi_get_portal_groups", 00:05:15.887 "iscsi_delete_target_node", 00:05:15.887 "iscsi_target_node_remove_pg_ig_maps", 00:05:15.887 "iscsi_target_node_add_pg_ig_maps", 00:05:15.887 "iscsi_create_target_node", 00:05:15.887 "iscsi_get_target_nodes", 00:05:15.887 "iscsi_delete_initiator_group", 00:05:15.887 "iscsi_initiator_group_remove_initiators", 00:05:15.887 "iscsi_initiator_group_add_initiators", 00:05:15.887 "iscsi_create_initiator_group", 00:05:15.887 "iscsi_get_initiator_groups", 00:05:15.887 "nvmf_set_crdt", 00:05:15.887 "nvmf_set_config", 00:05:15.887 "nvmf_set_max_subsystems", 00:05:15.887 "nvmf_stop_mdns_prr", 00:05:15.887 "nvmf_publish_mdns_prr", 00:05:15.887 "nvmf_subsystem_get_listeners", 00:05:15.887 "nvmf_subsystem_get_qpairs", 00:05:15.887 "nvmf_subsystem_get_controllers", 00:05:15.887 "nvmf_get_stats", 00:05:15.887 "nvmf_get_transports", 00:05:15.887 "nvmf_create_transport", 00:05:15.887 "nvmf_get_targets", 00:05:15.887 "nvmf_delete_target", 00:05:15.887 "nvmf_create_target", 00:05:15.887 "nvmf_subsystem_allow_any_host", 00:05:15.887 "nvmf_subsystem_remove_host", 00:05:15.887 "nvmf_subsystem_add_host", 00:05:15.887 "nvmf_ns_remove_host", 00:05:15.887 "nvmf_ns_add_host", 00:05:15.887 "nvmf_subsystem_remove_ns", 00:05:15.887 "nvmf_subsystem_add_ns", 00:05:15.887 "nvmf_subsystem_listener_set_ana_state", 00:05:15.887 "nvmf_discovery_get_referrals", 00:05:15.887 
"nvmf_discovery_remove_referral", 00:05:15.887 "nvmf_discovery_add_referral", 00:05:15.887 "nvmf_subsystem_remove_listener", 00:05:15.887 "nvmf_subsystem_add_listener", 00:05:15.887 "nvmf_delete_subsystem", 00:05:15.887 "nvmf_create_subsystem", 00:05:15.887 "nvmf_get_subsystems", 00:05:15.887 "env_dpdk_get_mem_stats", 00:05:15.887 "nbd_get_disks", 00:05:15.887 "nbd_stop_disk", 00:05:15.887 "nbd_start_disk", 00:05:15.887 "ublk_recover_disk", 00:05:15.887 "ublk_get_disks", 00:05:15.887 "ublk_stop_disk", 00:05:15.887 "ublk_start_disk", 00:05:15.887 "ublk_destroy_target", 00:05:15.887 "ublk_create_target", 00:05:15.887 "virtio_blk_create_transport", 00:05:15.887 "virtio_blk_get_transports", 00:05:15.887 "vhost_controller_set_coalescing", 00:05:15.887 "vhost_get_controllers", 00:05:15.887 "vhost_delete_controller", 00:05:15.887 "vhost_create_blk_controller", 00:05:15.887 "vhost_scsi_controller_remove_target", 00:05:15.887 "vhost_scsi_controller_add_target", 00:05:15.887 "vhost_start_scsi_controller", 00:05:15.887 "vhost_create_scsi_controller", 00:05:15.887 "thread_set_cpumask", 00:05:15.887 "framework_get_governor", 00:05:15.887 "framework_get_scheduler", 00:05:15.887 "framework_set_scheduler", 00:05:15.887 "framework_get_reactors", 00:05:15.887 "thread_get_io_channels", 00:05:15.887 "thread_get_pollers", 00:05:15.887 "thread_get_stats", 00:05:15.887 "framework_monitor_context_switch", 00:05:15.887 "spdk_kill_instance", 00:05:15.887 "log_enable_timestamps", 00:05:15.887 "log_get_flags", 00:05:15.887 "log_clear_flag", 00:05:15.887 "log_set_flag", 00:05:15.887 "log_get_level", 00:05:15.887 "log_set_level", 00:05:15.887 "log_get_print_level", 00:05:15.887 "log_set_print_level", 00:05:15.887 "framework_enable_cpumask_locks", 00:05:15.887 "framework_disable_cpumask_locks", 00:05:15.887 "framework_wait_init", 00:05:15.887 "framework_start_init", 00:05:15.887 "scsi_get_devices", 00:05:15.887 "bdev_get_histogram", 00:05:15.887 "bdev_enable_histogram", 00:05:15.887 
"bdev_set_qos_limit", 00:05:15.887 "bdev_set_qd_sampling_period", 00:05:15.887 "bdev_get_bdevs", 00:05:15.887 "bdev_reset_iostat", 00:05:15.887 "bdev_get_iostat", 00:05:15.887 "bdev_examine", 00:05:15.887 "bdev_wait_for_examine", 00:05:15.887 "bdev_set_options", 00:05:15.887 "notify_get_notifications", 00:05:15.887 "notify_get_types", 00:05:15.887 "accel_get_stats", 00:05:15.887 "accel_set_options", 00:05:15.887 "accel_set_driver", 00:05:15.887 "accel_crypto_key_destroy", 00:05:15.887 "accel_crypto_keys_get", 00:05:15.887 "accel_crypto_key_create", 00:05:15.887 "accel_assign_opc", 00:05:15.887 "accel_get_module_info", 00:05:15.887 "accel_get_opc_assignments", 00:05:15.887 "vmd_rescan", 00:05:15.887 "vmd_remove_device", 00:05:15.887 "vmd_enable", 00:05:15.887 "sock_get_default_impl", 00:05:15.887 "sock_set_default_impl", 00:05:15.887 "sock_impl_set_options", 00:05:15.887 "sock_impl_get_options", 00:05:15.887 "iobuf_get_stats", 00:05:15.887 "iobuf_set_options", 00:05:15.887 "keyring_get_keys", 00:05:15.887 "framework_get_pci_devices", 00:05:15.887 "framework_get_config", 00:05:15.887 "framework_get_subsystems", 00:05:15.887 "vfu_tgt_set_base_path", 00:05:15.887 "trace_get_info", 00:05:15.887 "trace_get_tpoint_group_mask", 00:05:15.887 "trace_disable_tpoint_group", 00:05:15.887 "trace_enable_tpoint_group", 00:05:15.887 "trace_clear_tpoint_mask", 00:05:15.887 "trace_set_tpoint_mask", 00:05:15.887 "spdk_get_version", 00:05:15.887 "rpc_get_methods" 00:05:15.887 ] 00:05:15.887 12:35:07 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:15.887 12:35:07 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:15.887 12:35:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.147 12:35:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:16.147 12:35:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3737307 00:05:16.147 12:35:07 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3737307 ']' 
00:05:16.147 12:35:07 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3737307 00:05:16.147 12:35:07 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:16.147 12:35:07 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.147 12:35:07 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3737307 00:05:16.147 12:35:07 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.147 12:35:07 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.147 12:35:07 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3737307' 00:05:16.147 killing process with pid 3737307 00:05:16.147 12:35:07 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3737307 00:05:16.147 12:35:07 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3737307 00:05:16.406 00:05:16.406 real 0m1.231s 00:05:16.406 user 0m2.215s 00:05:16.406 sys 0m0.446s 00:05:16.406 12:35:08 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.406 12:35:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.406 ************************************ 00:05:16.406 END TEST spdkcli_tcp 00:05:16.406 ************************************ 00:05:16.406 12:35:08 -- common/autotest_common.sh@1142 -- # return 0 00:05:16.406 12:35:08 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:16.406 12:35:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.406 12:35:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.406 12:35:08 -- common/autotest_common.sh@10 -- # set +x 00:05:16.406 ************************************ 00:05:16.406 START TEST dpdk_mem_utility 00:05:16.406 ************************************ 00:05:16.406 12:35:08 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:16.664 * Looking for test storage... 00:05:16.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:16.664 12:35:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:16.664 12:35:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3737572 00:05:16.664 12:35:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3737572 00:05:16.664 12:35:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.664 12:35:08 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3737572 ']' 00:05:16.664 12:35:08 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.664 12:35:08 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.664 12:35:08 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.664 12:35:08 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.664 12:35:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:16.664 [2024-07-15 12:35:08.450816] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:05:16.664 [2024-07-15 12:35:08.450875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3737572 ] 00:05:16.664 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.664 [2024-07-15 12:35:08.532708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.923 [2024-07-15 12:35:08.623868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.490 12:35:09 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.490 12:35:09 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:17.490 12:35:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:17.490 12:35:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:17.490 12:35:09 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.490 12:35:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.490 { 00:05:17.490 "filename": "/tmp/spdk_mem_dump.txt" 00:05:17.490 } 00:05:17.490 12:35:09 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.490 12:35:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:17.749 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:17.749 1 heaps totaling size 814.000000 MiB 00:05:17.749 size: 814.000000 MiB heap id: 0 00:05:17.749 end heaps---------- 00:05:17.749 8 mempools totaling size 598.116089 MiB 00:05:17.749 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:17.749 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:17.749 size: 84.521057 MiB name: bdev_io_3737572 00:05:17.749 size: 51.011292 MiB name: evtpool_3737572 
00:05:17.749 size: 50.003479 MiB name: msgpool_3737572 00:05:17.749 size: 21.763794 MiB name: PDU_Pool 00:05:17.749 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:17.749 size: 0.026123 MiB name: Session_Pool 00:05:17.749 end mempools------- 00:05:17.749 6 memzones totaling size 4.142822 MiB 00:05:17.749 size: 1.000366 MiB name: RG_ring_0_3737572 00:05:17.749 size: 1.000366 MiB name: RG_ring_1_3737572 00:05:17.749 size: 1.000366 MiB name: RG_ring_4_3737572 00:05:17.749 size: 1.000366 MiB name: RG_ring_5_3737572 00:05:17.749 size: 0.125366 MiB name: RG_ring_2_3737572 00:05:17.749 size: 0.015991 MiB name: RG_ring_3_3737572 00:05:17.749 end memzones------- 00:05:17.749 12:35:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:17.749 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:17.749 list of free elements. size: 12.519348 MiB 00:05:17.749 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:17.749 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:17.749 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:17.749 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:17.749 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:17.749 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:17.749 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:17.749 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:17.749 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:17.749 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:17.749 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:17.749 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:17.749 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:17.749 element at address: 0x200027e00000 with size: 0.410034 
MiB 00:05:17.749 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:17.749 list of standard malloc elements. size: 199.218079 MiB 00:05:17.749 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:17.749 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:17.749 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:17.749 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:17.749 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:17.749 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:17.749 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:17.749 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:17.749 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:17.749 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:17.749 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:17.749 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:17.749 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:17.749 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:17.749 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:17.749 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:17.749 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:17.749 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:17.749 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:17.749 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:17.749 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:17.749 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:17.749 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:17.749 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:17.749 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:17.749 element at address: 0x200003eff0c0 with 
size: 0.000183 MiB 00:05:17.749 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:17.749 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:17.749 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:17.749 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:17.749 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:17.749 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:17.749 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:17.749 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:17.749 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:17.749 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:17.749 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:17.749 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:17.749 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:17.749 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:17.749 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:17.749 list of memzone associated elements. 
size: 602.262573 MiB 00:05:17.749 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:17.749 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:17.749 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:17.749 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:17.749 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:17.749 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3737572_0 00:05:17.749 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:17.749 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3737572_0 00:05:17.749 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:17.749 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3737572_0 00:05:17.749 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:17.749 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:17.749 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:17.749 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:17.749 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:17.749 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3737572 00:05:17.749 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:17.749 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3737572 00:05:17.749 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:17.749 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3737572 00:05:17.749 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:17.749 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:17.749 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:17.749 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:17.749 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:17.749 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:17.749 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:17.749 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:17.749 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:17.749 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3737572 00:05:17.749 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:17.749 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3737572 00:05:17.749 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:17.749 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3737572 00:05:17.749 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:17.749 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3737572 00:05:17.749 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:17.749 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3737572 00:05:17.749 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:17.749 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:17.749 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:17.749 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:17.749 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:17.749 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:17.749 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:17.749 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3737572 00:05:17.749 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:17.749 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:17.749 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:17.749 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:17.749 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:05:17.749 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3737572 00:05:17.749 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:17.750 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:17.750 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:17.750 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3737572 00:05:17.750 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:17.750 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3737572 00:05:17.750 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:17.750 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:17.750 12:35:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:17.750 12:35:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3737572 00:05:17.750 12:35:09 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3737572 ']' 00:05:17.750 12:35:09 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3737572 00:05:17.750 12:35:09 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:17.750 12:35:09 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.750 12:35:09 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3737572 00:05:17.750 12:35:09 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.750 12:35:09 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.750 12:35:09 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3737572' 00:05:17.750 killing process with pid 3737572 00:05:17.750 12:35:09 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3737572 00:05:17.750 12:35:09 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3737572 00:05:18.008 00:05:18.008 real 0m1.609s 
00:05:18.008 user 0m1.818s 00:05:18.008 sys 0m0.439s 00:05:18.008 12:35:09 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.008 12:35:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:18.008 ************************************ 00:05:18.008 END TEST dpdk_mem_utility 00:05:18.008 ************************************ 00:05:18.008 12:35:09 -- common/autotest_common.sh@1142 -- # return 0 00:05:18.008 12:35:09 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:18.008 12:35:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.008 12:35:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.008 12:35:09 -- common/autotest_common.sh@10 -- # set +x 00:05:18.267 ************************************ 00:05:18.267 START TEST event 00:05:18.267 ************************************ 00:05:18.267 12:35:09 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:18.267 * Looking for test storage... 
00:05:18.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:18.267 12:35:10 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:18.267 12:35:10 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:18.267 12:35:10 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:18.267 12:35:10 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:18.267 12:35:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.267 12:35:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.267 ************************************ 00:05:18.267 START TEST event_perf 00:05:18.267 ************************************ 00:05:18.267 12:35:10 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:18.267 Running I/O for 1 seconds...[2024-07-15 12:35:10.113171] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:05:18.267 [2024-07-15 12:35:10.113224] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738027 ] 00:05:18.267 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.267 [2024-07-15 12:35:10.194981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:18.526 [2024-07-15 12:35:10.288464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.526 [2024-07-15 12:35:10.288583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.526 [2024-07-15 12:35:10.288694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:18.526 [2024-07-15 12:35:10.288695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.461 Running I/O for 1 seconds... 00:05:19.461 lcore 0: 101950 00:05:19.461 lcore 1: 101952 00:05:19.461 lcore 2: 101954 00:05:19.461 lcore 3: 101953 00:05:19.461 done. 
00:05:19.461 00:05:19.461 real 0m1.269s 00:05:19.461 user 0m4.166s 00:05:19.461 sys 0m0.094s 00:05:19.461 12:35:11 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.461 12:35:11 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:19.461 ************************************ 00:05:19.461 END TEST event_perf 00:05:19.461 ************************************ 00:05:19.461 12:35:11 event -- common/autotest_common.sh@1142 -- # return 0 00:05:19.461 12:35:11 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:19.719 12:35:11 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:19.719 12:35:11 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.719 12:35:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.720 ************************************ 00:05:19.720 START TEST event_reactor 00:05:19.720 ************************************ 00:05:19.720 12:35:11 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:19.720 [2024-07-15 12:35:11.459557] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:05:19.720 [2024-07-15 12:35:11.459626] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738254 ] 00:05:19.720 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.720 [2024-07-15 12:35:11.544567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.720 [2024-07-15 12:35:11.634650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.095 test_start 00:05:21.095 oneshot 00:05:21.095 tick 100 00:05:21.095 tick 100 00:05:21.095 tick 250 00:05:21.095 tick 100 00:05:21.095 tick 100 00:05:21.095 tick 100 00:05:21.095 tick 250 00:05:21.095 tick 500 00:05:21.095 tick 100 00:05:21.095 tick 100 00:05:21.095 tick 250 00:05:21.095 tick 100 00:05:21.095 tick 100 00:05:21.095 test_end 00:05:21.095 00:05:21.095 real 0m1.273s 00:05:21.095 user 0m1.173s 00:05:21.095 sys 0m0.094s 00:05:21.095 12:35:12 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.095 12:35:12 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:21.095 ************************************ 00:05:21.095 END TEST event_reactor 00:05:21.095 ************************************ 00:05:21.095 12:35:12 event -- common/autotest_common.sh@1142 -- # return 0 00:05:21.095 12:35:12 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:21.095 12:35:12 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:21.095 12:35:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.095 12:35:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.095 ************************************ 00:05:21.095 START TEST event_reactor_perf 00:05:21.095 ************************************ 00:05:21.095 12:35:12 
event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:21.096 [2024-07-15 12:35:12.803983] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:05:21.096 [2024-07-15 12:35:12.804042] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738479 ] 00:05:21.096 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.096 [2024-07-15 12:35:12.887436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.096 [2024-07-15 12:35:12.976448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.471 test_start 00:05:22.471 test_end 00:05:22.471 Performance: 313981 events per second 00:05:22.471 00:05:22.471 real 0m1.270s 00:05:22.471 user 0m1.173s 00:05:22.471 sys 0m0.091s 00:05:22.471 12:35:14 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.471 12:35:14 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:22.471 ************************************ 00:05:22.471 END TEST event_reactor_perf 00:05:22.471 ************************************ 00:05:22.471 12:35:14 event -- common/autotest_common.sh@1142 -- # return 0 00:05:22.471 12:35:14 event -- event/event.sh@49 -- # uname -s 00:05:22.471 12:35:14 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:22.471 12:35:14 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:22.471 12:35:14 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.471 12:35:14 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.471 12:35:14 event -- common/autotest_common.sh@10 -- # set +x 
00:05:22.471 ************************************ 00:05:22.471 START TEST event_scheduler 00:05:22.471 ************************************ 00:05:22.471 12:35:14 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:22.471 * Looking for test storage... 00:05:22.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:22.471 12:35:14 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:22.471 12:35:14 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3738784 00:05:22.471 12:35:14 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.471 12:35:14 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:22.471 12:35:14 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3738784 00:05:22.471 12:35:14 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3738784 ']' 00:05:22.471 12:35:14 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.471 12:35:14 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.471 12:35:14 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.471 12:35:14 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.471 12:35:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.471 [2024-07-15 12:35:14.269709] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:05:22.471 [2024-07-15 12:35:14.269770] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738784 ] 00:05:22.471 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.471 [2024-07-15 12:35:14.384650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:22.730 [2024-07-15 12:35:14.542387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.730 [2024-07-15 12:35:14.542540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.730 [2024-07-15 12:35:14.542610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.730 [2024-07-15 12:35:14.542620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.297 12:35:15 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.297 12:35:15 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:23.297 12:35:15 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:23.297 12:35:15 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.297 12:35:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.297 [2024-07-15 12:35:15.149715] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:23.297 [2024-07-15 12:35:15.149760] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:23.297 [2024-07-15 12:35:15.149785] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:23.297 [2024-07-15 12:35:15.149801] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:23.297 [2024-07-15 12:35:15.149817] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting 
scheduler core busy to 95 00:05:23.297 12:35:15 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.297 12:35:15 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:23.297 12:35:15 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.297 12:35:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.556 [2024-07-15 12:35:15.261087] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:23.556 12:35:15 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.556 12:35:15 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:23.556 12:35:15 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.556 12:35:15 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.556 12:35:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.556 ************************************ 00:05:23.556 START TEST scheduler_create_thread 00:05:23.556 ************************************ 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.556 2 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.556 3 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.556 4 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.556 5 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.556 6 
00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.556 7 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.556 8 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.556 9 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:23.556 12:35:15 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.556 10 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.556 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.122 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.122 12:35:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:24.122 12:35:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:24.122 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.122 12:35:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.057 12:35:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.057 12:35:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:25.057 12:35:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.057 12:35:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.994 12:35:17 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.994 12:35:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:25.994 12:35:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:25.994 12:35:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.994 12:35:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.930 12:35:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.930 00:05:26.930 real 0m3.233s 00:05:26.930 user 0m0.025s 00:05:26.930 sys 0m0.004s 00:05:26.930 12:35:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.930 12:35:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.930 ************************************ 00:05:26.930 END TEST scheduler_create_thread 00:05:26.930 ************************************ 00:05:26.930 12:35:18 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:26.930 12:35:18 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:26.930 12:35:18 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3738784 00:05:26.930 12:35:18 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3738784 ']' 00:05:26.930 12:35:18 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3738784 00:05:26.930 12:35:18 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:26.930 12:35:18 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.930 12:35:18 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3738784 00:05:26.930 12:35:18 event.event_scheduler -- 
common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:26.930 12:35:18 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:26.930 12:35:18 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3738784' 00:05:26.930 killing process with pid 3738784 00:05:26.930 12:35:18 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3738784 00:05:26.930 12:35:18 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3738784 00:05:27.202 [2024-07-15 12:35:18.915926] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:27.461 00:05:27.461 real 0m5.149s 00:05:27.461 user 0m10.092s 00:05:27.461 sys 0m0.480s 00:05:27.461 12:35:19 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.461 12:35:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.461 ************************************ 00:05:27.461 END TEST event_scheduler 00:05:27.461 ************************************ 00:05:27.461 12:35:19 event -- common/autotest_common.sh@1142 -- # return 0 00:05:27.461 12:35:19 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:27.461 12:35:19 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:27.461 12:35:19 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.461 12:35:19 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.461 12:35:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.461 ************************************ 00:05:27.461 START TEST app_repeat 00:05:27.461 ************************************ 00:05:27.461 12:35:19 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:27.461 12:35:19 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.461 12:35:19 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.461 12:35:19 
event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:27.461 12:35:19 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.461 12:35:19 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:27.461 12:35:19 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:27.461 12:35:19 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:27.461 12:35:19 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3739866 00:05:27.461 12:35:19 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.461 12:35:19 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:27.461 12:35:19 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3739866' 00:05:27.461 Process app_repeat pid: 3739866 00:05:27.461 12:35:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:27.461 12:35:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:27.461 spdk_app_start Round 0 00:05:27.461 12:35:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3739866 /var/tmp/spdk-nbd.sock 00:05:27.461 12:35:19 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3739866 ']' 00:05:27.461 12:35:19 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.461 12:35:19 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.461 12:35:19 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:27.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:27.461 12:35:19 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.461 12:35:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.461 [2024-07-15 12:35:19.393070] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:05:27.461 [2024-07-15 12:35:19.393132] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3739866 ] 00:05:27.719 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.719 [2024-07-15 12:35:19.469571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.719 [2024-07-15 12:35:19.560933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.719 [2024-07-15 12:35:19.560938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.977 12:35:19 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.977 12:35:19 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:27.977 12:35:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.977 Malloc0 00:05:28.236 12:35:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.494 Malloc1 00:05:28.494 12:35:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.494 12:35:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.494 12:35:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.494 12:35:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local 
bdev_list 00:05:28.494 12:35:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.494 12:35:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:28.494 12:35:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.494 12:35:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.494 12:35:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.494 12:35:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:28.494 12:35:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.494 12:35:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:28.494 12:35:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:28.494 12:35:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:28.494 12:35:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.494 12:35:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:28.753 /dev/nbd0 00:05:28.753 12:35:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:28.753 12:35:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@871 
-- # break 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.753 1+0 records in 00:05:28.753 1+0 records out 00:05:28.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188379 s, 21.7 MB/s 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:28.753 12:35:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.753 12:35:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.753 12:35:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:28.753 /dev/nbd1 00:05:28.753 12:35:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:28.753 12:35:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i 
<= 20 )) 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.753 1+0 records in 00:05:28.753 1+0 records out 00:05:28.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224847 s, 18.2 MB/s 00:05:28.753 12:35:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.012 12:35:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:29.012 12:35:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.012 12:35:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:29.012 12:35:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:29.012 { 00:05:29.012 "nbd_device": "/dev/nbd0", 00:05:29.012 
"bdev_name": "Malloc0" 00:05:29.012 }, 00:05:29.012 { 00:05:29.012 "nbd_device": "/dev/nbd1", 00:05:29.012 "bdev_name": "Malloc1" 00:05:29.012 } 00:05:29.012 ]' 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:29.012 { 00:05:29.012 "nbd_device": "/dev/nbd0", 00:05:29.012 "bdev_name": "Malloc0" 00:05:29.012 }, 00:05:29.012 { 00:05:29.012 "nbd_device": "/dev/nbd1", 00:05:29.012 "bdev_name": "Malloc1" 00:05:29.012 } 00:05:29.012 ]' 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:29.012 /dev/nbd1' 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:29.012 /dev/nbd1' 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 
bs=4096 count=256 00:05:29.012 256+0 records in 00:05:29.012 256+0 records out 00:05:29.012 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103318 s, 101 MB/s 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.012 12:35:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:29.270 256+0 records in 00:05:29.270 256+0 records out 00:05:29.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200473 s, 52.3 MB/s 00:05:29.270 12:35:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.270 12:35:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:29.270 256+0 records in 00:05:29.270 256+0 records out 00:05:29.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021162 s, 49.5 MB/s 00:05:29.270 12:35:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:29.270 12:35:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.270 12:35:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.270 12:35:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:29.270 12:35:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.270 12:35:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:29.270 12:35:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:29.270 12:35:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.271 12:35:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 
/dev/nbd0 00:05:29.271 12:35:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.271 12:35:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:29.271 12:35:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.271 12:35:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:29.271 12:35:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.271 12:35:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.271 12:35:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:29.271 12:35:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:29.271 12:35:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.271 12:35:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.271 12:35:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.271 12:35:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.271 12:35:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.271 12:35:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.271 12:35:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.271 12:35:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.271 12:35:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.271 12:35:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.271 12:35:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.271 
12:35:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.529 12:35:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.529 12:35:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.529 12:35:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.529 12:35:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.529 12:35:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.529 12:35:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.529 12:35:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.529 12:35:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.529 12:35:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.529 12:35:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.788 12:35:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.788 12:35:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.788 12:35:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.788 12:35:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.788 12:35:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.788 12:35:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.788 12:35:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.788 12:35:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:29.788 12:35:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.788 12:35:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.788 
12:35:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.788 12:35:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.788 12:35:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.788 12:35:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:30.355 12:35:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:30.355 [2024-07-15 12:35:22.184068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.355 [2024-07-15 12:35:22.267916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.355 [2024-07-15 12:35:22.267922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.612 [2024-07-15 12:35:22.312394] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.612 [2024-07-15 12:35:22.312439] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:33.150 12:35:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:33.150 12:35:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:33.150 spdk_app_start Round 1 00:05:33.150 12:35:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3739866 /var/tmp/spdk-nbd.sock 00:05:33.150 12:35:24 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3739866 ']' 00:05:33.150 12:35:24 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.150 12:35:24 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.150 12:35:24 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:33.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:33.150 12:35:24 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.150 12:35:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.409 12:35:25 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.409 12:35:25 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:33.409 12:35:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.667 Malloc0 00:05:33.667 12:35:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.925 Malloc1 00:05:33.925 12:35:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.925 12:35:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.925 12:35:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.925 12:35:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.925 12:35:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.925 12:35:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.925 12:35:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.925 12:35:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.925 12:35:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.925 12:35:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.925 12:35:25 event.app_repeat -- bdev/nbd_common.sh@11 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.926 12:35:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.926 12:35:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:33.926 12:35:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.926 12:35:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.926 12:35:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:34.184 /dev/nbd0 00:05:34.184 12:35:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:34.184 12:35:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:34.184 12:35:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:34.184 12:35:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:34.184 12:35:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:34.184 12:35:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:34.184 12:35:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:34.184 12:35:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:34.184 12:35:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:34.184 12:35:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:34.184 12:35:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.184 1+0 records in 00:05:34.184 1+0 records out 00:05:34.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225293 s, 18.2 MB/s 00:05:34.184 12:35:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.184 12:35:26 
event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:34.184 12:35:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.184 12:35:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:34.184 12:35:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:34.184 12:35:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.184 12:35:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.184 12:35:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:34.457 /dev/nbd1 00:05:34.457 12:35:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:34.457 12:35:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:34.457 12:35:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:34.457 12:35:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:34.457 12:35:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:34.457 12:35:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:34.457 12:35:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:34.457 12:35:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:34.457 12:35:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:34.457 12:35:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:34.457 12:35:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.457 1+0 records in 00:05:34.457 1+0 records out 00:05:34.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239524 s, 
17.1 MB/s 00:05:34.457 12:35:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.457 12:35:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:34.457 12:35:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.457 12:35:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:34.457 12:35:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:34.457 12:35:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.457 12:35:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.457 12:35:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.457 12:35:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.457 12:35:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.770 12:35:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:34.770 { 00:05:34.770 "nbd_device": "/dev/nbd0", 00:05:34.770 "bdev_name": "Malloc0" 00:05:34.770 }, 00:05:34.770 { 00:05:34.770 "nbd_device": "/dev/nbd1", 00:05:34.770 "bdev_name": "Malloc1" 00:05:34.770 } 00:05:34.770 ]' 00:05:34.770 12:35:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.770 { 00:05:34.770 "nbd_device": "/dev/nbd0", 00:05:34.770 "bdev_name": "Malloc0" 00:05:34.770 }, 00:05:34.770 { 00:05:34.770 "nbd_device": "/dev/nbd1", 00:05:34.770 "bdev_name": "Malloc1" 00:05:34.770 } 00:05:34.770 ]' 00:05:34.770 12:35:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.770 12:35:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.770 /dev/nbd1' 00:05:34.770 12:35:26 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.770 /dev/nbd1' 00:05:34.770 12:35:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.770 12:35:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:34.770 12:35:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:34.770 12:35:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:34.770 12:35:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:34.770 12:35:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:34.770 12:35:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.770 12:35:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.770 12:35:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.770 12:35:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.770 12:35:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.770 12:35:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.770 256+0 records in 00:05:34.770 256+0 records out 00:05:34.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100681 s, 104 MB/s 00:05:34.770 12:35:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.770 12:35:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:35.041 256+0 records in 00:05:35.041 256+0 records out 00:05:35.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198574 s, 52.8 MB/s 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.041 12:35:26 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:35.041 256+0 records in 00:05:35.041 256+0 records out 00:05:35.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209693 s, 50.0 MB/s 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.041 12:35:26 event.app_repeat -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.041 12:35:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:35.308 12:35:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:35.308 12:35:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:35.308 12:35:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:35.308 12:35:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.308 12:35:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.308 12:35:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:05:35.308 12:35:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.308 12:35:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.308 12:35:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.308 12:35:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.308 12:35:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.566 12:35:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:35.566 12:35:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:35.566 12:35:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.825 12:35:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:35.825 12:35:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:35.825 12:35:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.825 12:35:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:35.825 12:35:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:35.825 12:35:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:35.825 12:35:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:35.825 12:35:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:35.825 12:35:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:35.825 12:35:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:36.084 12:35:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:36.084 [2024-07-15 12:35:27.996901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.342 [2024-07-15 12:35:28.082912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 
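The teardown above repeatedly shows the same bounded-retry idiom: `waitfornbd_exit` loops up to 20 times, greps `/proc/partitions` for the nbd name, and `break`s (returning 0) as soon as the device disappears. The sketch below abstracts that pattern; `wait_for` and the demo predicate are illustrative names, not SPDK helpers, and the retry count and sleep interval are assumptions chosen to mirror the `(( i <= 20 ))` loop in the log.

```shell
#!/usr/bin/env bash
# Bounded polling loop in the style of waitfornbd/waitfornbd_exit above:
# retry a check command up to 20 times, succeeding as soon as it passes.
wait_for() {
    local check_cmd=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        if eval "$check_cmd"; then
            return 0   # condition met -- mirrors the "break ... return 0" lines
        fi
        sleep 0.1      # back off briefly between attempts
    done
    return 1           # gave up after 20 attempts
}

# Demo: the condition holds immediately, so the loop exits on iteration 1.
wait_for "test -d /tmp" && echo "ready"
```

In the real script the check is `grep -q -w "$nbd_name" /proc/partitions` (negated for the exit variant), which is why the log prints one `grep` line per attempt before the `break`.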
00:05:36.342 [2024-07-15 12:35:28.082916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.342 [2024-07-15 12:35:28.128372] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:36.342 [2024-07-15 12:35:28.128418] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:38.873 12:35:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.873 12:35:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:38.873 spdk_app_start Round 2 00:05:38.873 12:35:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3739866 /var/tmp/spdk-nbd.sock 00:05:38.873 12:35:30 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3739866 ']' 00:05:38.873 12:35:30 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.873 12:35:30 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.873 12:35:30 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:38.873 12:35:30 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.873 12:35:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:39.441 12:35:31 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.441 12:35:31 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:39.441 12:35:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.441 Malloc0 00:05:39.441 12:35:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.699 Malloc1 00:05:39.699 12:35:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.699 12:35:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.699 12:35:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.699 12:35:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.699 12:35:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.699 12:35:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.700 12:35:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.700 12:35:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.700 12:35:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.700 12:35:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.700 12:35:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.700 12:35:31 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:39.700 12:35:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:39.700 12:35:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.700 12:35:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.700 12:35:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.960 /dev/nbd0 00:05:39.960 12:35:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.960 12:35:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.960 12:35:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:39.960 12:35:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:39.960 12:35:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:39.960 12:35:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:39.960 12:35:31 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:39.960 12:35:31 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:39.960 12:35:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:39.960 12:35:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:39.960 12:35:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.960 1+0 records in 00:05:39.960 1+0 records out 00:05:39.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216873 s, 18.9 MB/s 00:05:39.960 12:35:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.960 12:35:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:39.960 12:35:31 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.960 12:35:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:39.960 12:35:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:39.960 12:35:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.960 12:35:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.960 12:35:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.217 /dev/nbd1 00:05:40.217 12:35:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.217 12:35:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.217 12:35:32 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:40.217 12:35:32 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:40.217 12:35:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:40.217 12:35:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:40.217 12:35:32 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:40.217 12:35:32 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:40.217 12:35:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:40.217 12:35:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:40.217 12:35:32 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.217 1+0 records in 00:05:40.217 1+0 records out 00:05:40.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243895 s, 16.8 MB/s 00:05:40.217 12:35:32 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.475 12:35:32 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:40.475 12:35:32 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.475 12:35:32 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:40.475 12:35:32 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:40.475 12:35:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.475 12:35:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.475 12:35:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.475 12:35:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.475 12:35:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.733 { 00:05:40.733 "nbd_device": "/dev/nbd0", 00:05:40.733 "bdev_name": "Malloc0" 00:05:40.733 }, 00:05:40.733 { 00:05:40.733 "nbd_device": "/dev/nbd1", 00:05:40.733 "bdev_name": "Malloc1" 00:05:40.733 } 00:05:40.733 ]' 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.733 { 00:05:40.733 "nbd_device": "/dev/nbd0", 00:05:40.733 "bdev_name": "Malloc0" 00:05:40.733 }, 00:05:40.733 { 00:05:40.733 "nbd_device": "/dev/nbd1", 00:05:40.733 "bdev_name": "Malloc1" 00:05:40.733 } 00:05:40.733 ]' 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.733 /dev/nbd1' 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.733 /dev/nbd1' 00:05:40.733 
12:35:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.733 256+0 records in 00:05:40.733 256+0 records out 00:05:40.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100705 s, 104 MB/s 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.733 256+0 records in 00:05:40.733 256+0 records out 00:05:40.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200847 s, 52.2 MB/s 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.733 256+0 records in 00:05:40.733 256+0 records out 00:05:40.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213208 s, 49.2 MB/s 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.733 12:35:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.991 12:35:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.991 12:35:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.991 12:35:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.991 12:35:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.991 12:35:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.991 12:35:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.991 12:35:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.991 12:35:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.991 12:35:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.991 12:35:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.247 12:35:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.247 12:35:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.247 12:35:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.247 12:35:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.248 12:35:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.248 12:35:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.248 12:35:33 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:41.248 12:35:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.248 12:35:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.248 12:35:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.248 12:35:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.505 12:35:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.505 12:35:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.505 12:35:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.505 12:35:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.505 12:35:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.505 12:35:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.505 12:35:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.505 12:35:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.505 12:35:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.505 12:35:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.505 12:35:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.505 12:35:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.505 12:35:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.763 12:35:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.076 [2024-07-15 12:35:33.814314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.076 [2024-07-15 12:35:33.897985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.076 [2024-07-15 12:35:33.897990] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.076 [2024-07-15 12:35:33.942553] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.076 [2024-07-15 12:35:33.942597] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.362 12:35:36 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3739866 /var/tmp/spdk-nbd.sock 00:05:45.362 12:35:36 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3739866 ']' 00:05:45.362 12:35:36 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.362 12:35:36 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.362 12:35:36 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:45.362 12:35:36 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.362 12:35:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.362 12:35:36 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.362 12:35:36 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:45.362 12:35:36 event.app_repeat -- event/event.sh@39 -- # killprocess 3739866 00:05:45.362 12:35:36 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3739866 ']' 00:05:45.362 12:35:36 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3739866 00:05:45.362 12:35:36 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:45.362 12:35:36 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:45.362 12:35:36 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3739866 00:05:45.362 12:35:36 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:45.362 12:35:36 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:45.362 12:35:36 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3739866' 00:05:45.362 killing process with pid 3739866 00:05:45.362 12:35:36 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3739866 00:05:45.362 12:35:36 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3739866 00:05:45.362 spdk_app_start is called in Round 0. 00:05:45.362 Shutdown signal received, stop current app iteration 00:05:45.362 Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 reinitialization... 00:05:45.362 spdk_app_start is called in Round 1. 00:05:45.362 Shutdown signal received, stop current app iteration 00:05:45.362 Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 reinitialization... 00:05:45.362 spdk_app_start is called in Round 2. 
00:05:45.362 Shutdown signal received, stop current app iteration 00:05:45.362 Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 reinitialization... 00:05:45.362 spdk_app_start is called in Round 3. 00:05:45.362 Shutdown signal received, stop current app iteration 00:05:45.362 12:35:37 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:45.362 12:35:37 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:45.362 00:05:45.362 real 0m17.758s 00:05:45.362 user 0m39.515s 00:05:45.362 sys 0m2.769s 00:05:45.362 12:35:37 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.362 12:35:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.362 ************************************ 00:05:45.362 END TEST app_repeat 00:05:45.362 ************************************ 00:05:45.362 12:35:37 event -- common/autotest_common.sh@1142 -- # return 0 00:05:45.362 12:35:37 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:45.362 12:35:37 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:45.362 12:35:37 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.362 12:35:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.362 12:35:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.362 ************************************ 00:05:45.362 START TEST cpu_locks 00:05:45.362 ************************************ 00:05:45.362 12:35:37 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:45.362 * Looking for test storage... 
00:05:45.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:45.362 12:35:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:45.362 12:35:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:45.362 12:35:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:45.362 12:35:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:45.362 12:35:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.362 12:35:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.362 12:35:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.621 ************************************ 00:05:45.621 START TEST default_locks 00:05:45.621 ************************************ 00:05:45.621 12:35:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:45.621 12:35:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3743334 00:05:45.621 12:35:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3743334 00:05:45.621 12:35:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.621 12:35:37 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3743334 ']' 00:05:45.621 12:35:37 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.621 12:35:37 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.621 12:35:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:45.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.621 12:35:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.621 12:35:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.621 [2024-07-15 12:35:37.379984] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:05:45.621 [2024-07-15 12:35:37.380045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3743334 ] 00:05:45.621 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.621 [2024-07-15 12:35:37.461388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.621 [2024-07-15 12:35:37.552236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.557 12:35:38 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.557 12:35:38 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:46.557 12:35:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3743334 00:05:46.557 12:35:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3743334 00:05:46.557 12:35:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.817 lslocks: write error 00:05:46.817 12:35:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3743334 00:05:46.817 12:35:38 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3743334 ']' 00:05:46.817 12:35:38 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3743334 00:05:46.817 12:35:38 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:46.817 12:35:38 event.cpu_locks.default_locks 
-- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.817 12:35:38 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3743334 00:05:46.817 12:35:38 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.817 12:35:38 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.817 12:35:38 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3743334' 00:05:46.817 killing process with pid 3743334 00:05:46.817 12:35:38 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3743334 00:05:46.817 12:35:38 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3743334 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3743334 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3743334 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 3743334 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3743334 ']' 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3743334) - No such process 00:05:47.386 ERROR: process (pid: 3743334) is no longer running 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:47.386 00:05:47.386 real 0m1.747s 00:05:47.386 user 0m1.913s 00:05:47.386 sys 0m0.586s 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.386 12:35:39 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:05:47.386 ************************************ 00:05:47.386 END TEST default_locks 00:05:47.386 ************************************ 00:05:47.386 12:35:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:47.386 12:35:39 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:47.386 12:35:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.386 12:35:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.386 12:35:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.386 ************************************ 00:05:47.386 START TEST default_locks_via_rpc 00:05:47.386 ************************************ 00:05:47.386 12:35:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:47.386 12:35:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3743794 00:05:47.386 12:35:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3743794 00:05:47.386 12:35:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3743794 ']' 00:05:47.386 12:35:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.386 12:35:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.386 12:35:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.386 12:35:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:47.386 12:35:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.386 12:35:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.386 [2024-07-15 12:35:39.183928] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:05:47.386 [2024-07-15 12:35:39.183979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3743794 ] 00:05:47.386 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.386 [2024-07-15 12:35:39.264307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.645 [2024-07-15 12:35:39.354025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.645 12:35:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.645 12:35:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:47.646 12:35:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:47.646 12:35:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.646 12:35:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.646 12:35:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.646 12:35:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:47.646 12:35:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:47.646 12:35:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:47.646 12:35:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:05:47.646 12:35:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:47.646 12:35:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.646 12:35:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.646 12:35:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.646 12:35:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3743794 00:05:47.646 12:35:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3743794 00:05:47.646 12:35:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.213 12:35:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3743794 00:05:48.213 12:35:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3743794 ']' 00:05:48.213 12:35:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3743794 00:05:48.213 12:35:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:48.213 12:35:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.213 12:35:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3743794 00:05:48.213 12:35:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.213 12:35:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.213 12:35:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3743794' 00:05:48.213 killing process with pid 3743794 00:05:48.213 12:35:40 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@967 -- # kill 3743794 00:05:48.213 12:35:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3743794 00:05:48.780 00:05:48.780 real 0m1.325s 00:05:48.780 user 0m1.344s 00:05:48.780 sys 0m0.590s 00:05:48.781 12:35:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.781 12:35:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.781 ************************************ 00:05:48.781 END TEST default_locks_via_rpc 00:05:48.781 ************************************ 00:05:48.781 12:35:40 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:48.781 12:35:40 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:48.781 12:35:40 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.781 12:35:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.781 12:35:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.781 ************************************ 00:05:48.781 START TEST non_locking_app_on_locked_coremask 00:05:48.781 ************************************ 00:05:48.781 12:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:48.781 12:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3744086 00:05:48.781 12:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3744086 /var/tmp/spdk.sock 00:05:48.781 12:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.781 12:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3744086 ']' 
00:05:48.781 12:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.781 12:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.781 12:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.781 12:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.781 12:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.781 [2024-07-15 12:35:40.576269] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:05:48.781 [2024-07-15 12:35:40.576321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3744086 ] 00:05:48.781 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.781 [2024-07-15 12:35:40.654966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.039 [2024-07-15 12:35:40.748316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.039 12:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.040 12:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:49.040 12:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3744094 00:05:49.040 12:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # 
waitforlisten 3744094 /var/tmp/spdk2.sock 00:05:49.040 12:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:49.040 12:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3744094 ']' 00:05:49.040 12:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.040 12:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.040 12:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.040 12:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.040 12:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.298 [2024-07-15 12:35:41.017840] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:05:49.298 [2024-07-15 12:35:41.017910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3744094 ] 00:05:49.298 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.298 [2024-07-15 12:35:41.128724] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:49.298 [2024-07-15 12:35:41.128753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.556 [2024-07-15 12:35:41.308075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.122 12:35:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.122 12:35:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:50.122 12:35:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3744086 00:05:50.122 12:35:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3744086 00:05:50.122 12:35:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.058 lslocks: write error 00:05:51.058 12:35:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3744086 00:05:51.058 12:35:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3744086 ']' 00:05:51.058 12:35:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3744086 00:05:51.058 12:35:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:51.058 12:35:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.058 12:35:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3744086 00:05:51.058 12:35:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.058 12:35:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.058 12:35:42 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 3744086' 00:05:51.058 killing process with pid 3744086 00:05:51.058 12:35:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3744086 00:05:51.058 12:35:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3744086 00:05:51.643 12:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3744094 00:05:51.643 12:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3744094 ']' 00:05:51.643 12:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3744094 00:05:51.643 12:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:51.643 12:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.900 12:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3744094 00:05:51.900 12:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.900 12:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.900 12:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3744094' 00:05:51.900 killing process with pid 3744094 00:05:51.900 12:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3744094 00:05:51.900 12:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3744094 00:05:52.158 00:05:52.158 real 0m3.442s 00:05:52.158 user 0m3.738s 00:05:52.158 sys 0m1.126s 00:05:52.158 12:35:43 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.158 12:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.158 ************************************ 00:05:52.158 END TEST non_locking_app_on_locked_coremask 00:05:52.158 ************************************ 00:05:52.158 12:35:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:52.158 12:35:43 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:52.158 12:35:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.158 12:35:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.158 12:35:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.158 ************************************ 00:05:52.158 START TEST locking_app_on_unlocked_coremask 00:05:52.158 ************************************ 00:05:52.158 12:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:52.158 12:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3744653 00:05:52.158 12:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3744653 /var/tmp/spdk.sock 00:05:52.158 12:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:52.158 12:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3744653 ']' 00:05:52.158 12:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.158 12:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.158 12:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.158 12:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.158 12:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.158 [2024-07-15 12:35:44.081482] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:05:52.158 [2024-07-15 12:35:44.081523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3744653 ] 00:05:52.417 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.417 [2024-07-15 12:35:44.150912] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:52.417 [2024-07-15 12:35:44.150944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.417 [2024-07-15 12:35:44.233084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.353 12:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.353 12:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:53.353 12:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3744915 00:05:53.353 12:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3744915 /var/tmp/spdk2.sock 00:05:53.353 12:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:53.353 12:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3744915 ']' 00:05:53.353 12:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.353 12:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.353 12:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.353 12:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.353 12:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.353 [2024-07-15 12:35:44.992839] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:05:53.353 [2024-07-15 12:35:44.992903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3744915 ] 00:05:53.353 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.353 [2024-07-15 12:35:45.102942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.353 [2024-07-15 12:35:45.279062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.288 12:35:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.288 12:35:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:54.288 12:35:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3744915 00:05:54.288 12:35:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3744915 00:05:54.288 12:35:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.547 lslocks: write error 00:05:54.547 12:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3744653 00:05:54.547 12:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3744653 ']' 00:05:54.547 12:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3744653 00:05:54.547 12:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:54.547 12:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.547 12:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3744653 00:05:54.547 12:35:46 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.547 12:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.547 12:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3744653' 00:05:54.547 killing process with pid 3744653 00:05:54.547 12:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3744653 00:05:54.547 12:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3744653 00:05:55.481 12:35:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3744915 00:05:55.481 12:35:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3744915 ']' 00:05:55.481 12:35:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3744915 00:05:55.481 12:35:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:55.481 12:35:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.481 12:35:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3744915 00:05:55.481 12:35:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.481 12:35:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.481 12:35:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3744915' 00:05:55.481 killing process with pid 3744915 00:05:55.481 12:35:47 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@967 -- # kill 3744915 00:05:55.481 12:35:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3744915 00:05:55.739 00:05:55.739 real 0m3.463s 00:05:55.739 user 0m3.841s 00:05:55.739 sys 0m1.014s 00:05:55.739 12:35:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.739 12:35:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.739 ************************************ 00:05:55.739 END TEST locking_app_on_unlocked_coremask 00:05:55.739 ************************************ 00:05:55.739 12:35:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:55.739 12:35:47 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:55.739 12:35:47 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.739 12:35:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.739 12:35:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.739 ************************************ 00:05:55.739 START TEST locking_app_on_locked_coremask 00:05:55.740 ************************************ 00:05:55.740 12:35:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:55.740 12:35:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3745398 00:05:55.740 12:35:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3745398 /var/tmp/spdk.sock 00:05:55.740 12:35:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.740 12:35:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 
3745398 ']' 00:05:55.740 12:35:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.740 12:35:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.740 12:35:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.740 12:35:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.740 12:35:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.740 [2024-07-15 12:35:47.617096] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:05:55.740 [2024-07-15 12:35:47.617153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3745398 ] 00:05:55.740 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.998 [2024-07-15 12:35:47.697135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.998 [2024-07-15 12:35:47.787384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.256 12:35:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.256 12:35:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:56.256 12:35:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3745479 00:05:56.256 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3745479 
/var/tmp/spdk2.sock 00:05:56.256 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:56.256 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:56.256 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3745479 /var/tmp/spdk2.sock 00:05:56.256 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:56.256 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.256 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:56.256 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.256 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3745479 /var/tmp/spdk2.sock 00:05:56.256 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3745479 ']' 00:05:56.256 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.256 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.256 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:56.256 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.256 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.256 [2024-07-15 12:35:48.056384] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:05:56.256 [2024-07-15 12:35:48.056445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3745479 ] 00:05:56.256 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.256 [2024-07-15 12:35:48.162007] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3745398 has claimed it. 00:05:56.256 [2024-07-15 12:35:48.162048] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:56.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3745479) - No such process 00:05:56.822 ERROR: process (pid: 3745479) is no longer running 00:05:56.822 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.822 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:56.822 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:56.822 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:56.822 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:56.822 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:56.822 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 3745398 00:05:56.822 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3745398 00:05:56.822 12:35:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.388 lslocks: write error 00:05:57.388 12:35:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3745398 00:05:57.388 12:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3745398 ']' 00:05:57.388 12:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3745398 00:05:57.388 12:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:57.388 12:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.388 12:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3745398 00:05:57.388 12:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.388 12:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.388 12:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3745398' 00:05:57.388 killing process with pid 3745398 00:05:57.388 12:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3745398 00:05:57.388 12:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3745398 00:05:57.646 00:05:57.646 real 0m1.846s 00:05:57.646 user 0m2.049s 00:05:57.646 sys 0m0.613s 00:05:57.646 12:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.646 
12:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.646 ************************************ 00:05:57.646 END TEST locking_app_on_locked_coremask 00:05:57.646 ************************************ 00:05:57.646 12:35:49 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:57.646 12:35:49 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:57.647 12:35:49 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.647 12:35:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.647 12:35:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.647 ************************************ 00:05:57.647 START TEST locking_overlapped_coremask 00:05:57.647 ************************************ 00:05:57.647 12:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:57.647 12:35:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3745773 00:05:57.647 12:35:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3745773 /var/tmp/spdk.sock 00:05:57.647 12:35:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:57.647 12:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3745773 ']' 00:05:57.647 12:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.647 12:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.647 12:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.647 12:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.647 12:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.647 [2024-07-15 12:35:49.536300] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:05:57.647 [2024-07-15 12:35:49.536356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3745773 ] 00:05:57.647 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.905 [2024-07-15 12:35:49.617606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.905 [2024-07-15 12:35:49.702291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.905 [2024-07-15 12:35:49.702403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.905 [2024-07-15 12:35:49.702405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.164 12:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.164 12:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:58.164 12:35:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3745783 00:05:58.164 12:35:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3745783 /var/tmp/spdk2.sock 00:05:58.164 12:35:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:58.164 12:35:49 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:58.164 12:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3745783 /var/tmp/spdk2.sock 00:05:58.164 12:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:58.164 12:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.164 12:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:58.164 12:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.164 12:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3745783 /var/tmp/spdk2.sock 00:05:58.164 12:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3745783 ']' 00:05:58.164 12:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.164 12:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.164 12:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.164 12:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.164 12:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.164 [2024-07-15 12:35:49.980271] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:05:58.164 [2024-07-15 12:35:49.980329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3745783 ] 00:05:58.164 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.422 [2024-07-15 12:35:50.180182] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3745773 has claimed it. 00:05:58.422 [2024-07-15 12:35:50.180279] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:58.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3745783) - No such process 00:05:58.990 ERROR: process (pid: 3745783) is no longer running 00:05:58.990 12:35:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.990 12:35:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:58.990 12:35:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:58.990 12:35:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:58.990 12:35:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:58.990 12:35:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:58.990 12:35:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:58.990 12:35:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:58.990 12:35:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:58.990 12:35:50 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:58.990 12:35:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3745773 00:05:58.990 12:35:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3745773 ']' 00:05:58.990 12:35:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3745773 00:05:58.990 12:35:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:58.990 12:35:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.990 12:35:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3745773 00:05:58.990 12:35:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.990 12:35:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.990 12:35:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3745773' 00:05:58.990 killing process with pid 3745773 00:05:58.990 12:35:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 3745773 00:05:58.990 12:35:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3745773 00:05:59.249 00:05:59.249 real 0m1.606s 00:05:59.249 user 0m4.327s 00:05:59.249 sys 0m0.478s 00:05:59.249 12:35:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.249 12:35:51 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.249 ************************************ 00:05:59.249 END TEST locking_overlapped_coremask 00:05:59.249 ************************************ 00:05:59.249 12:35:51 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:59.249 12:35:51 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:59.249 12:35:51 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.249 12:35:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.249 12:35:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.249 ************************************ 00:05:59.249 START TEST locking_overlapped_coremask_via_rpc 00:05:59.249 ************************************ 00:05:59.249 12:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:59.249 12:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3746071 00:05:59.249 12:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3746071 /var/tmp/spdk.sock 00:05:59.249 12:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:59.249 12:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3746071 ']' 00:05:59.249 12:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.249 12:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.249 12:35:51 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.249 12:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.249 12:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.508 [2024-07-15 12:35:51.203697] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:05:59.508 [2024-07-15 12:35:51.203737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746071 ] 00:05:59.508 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.508 [2024-07-15 12:35:51.273878] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:59.508 [2024-07-15 12:35:51.273913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.508 [2024-07-15 12:35:51.357887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.508 [2024-07-15 12:35:51.358000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.508 [2024-07-15 12:35:51.358000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.766 12:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.766 12:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:59.766 12:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3746085 00:05:59.766 12:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3746085 /var/tmp/spdk2.sock 00:05:59.766 12:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:59.766 12:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3746085 ']' 00:05:59.766 12:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.766 12:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.766 12:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:59.766 12:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.766 12:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.766 [2024-07-15 12:35:51.635245] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:05:59.767 [2024-07-15 12:35:51.635309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746085 ] 00:05:59.767 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.025 [2024-07-15 12:35:51.827770] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:00.025 [2024-07-15 12:35:51.827829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.283 [2024-07-15 12:35:52.129537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.283 [2024-07-15 12:35:52.129654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:00.283 [2024-07-15 12:35:52.129659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.850 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.850 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:00.850 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:00.850 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.850 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.850 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:06:00.850 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.850 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:00.850 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.850 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:00.850 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.850 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:00.850 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.850 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.850 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.850 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.850 [2024-07-15 12:35:52.662447] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3746071 has claimed it. 
00:06:00.850 request: 00:06:00.850 { 00:06:00.850 "method": "framework_enable_cpumask_locks", 00:06:00.850 "req_id": 1 00:06:00.850 } 00:06:00.850 Got JSON-RPC error response 00:06:00.850 response: 00:06:00.850 { 00:06:00.850 "code": -32603, 00:06:00.851 "message": "Failed to claim CPU core: 2" 00:06:00.851 } 00:06:00.851 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:00.851 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:00.851 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.851 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:00.851 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.851 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3746071 /var/tmp/spdk.sock 00:06:00.851 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3746071 ']' 00:06:00.851 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.851 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.851 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:00.851 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.851 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.109 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.109 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:01.109 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3746085 /var/tmp/spdk2.sock 00:06:01.109 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3746085 ']' 00:06:01.109 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.109 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.109 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:01.109 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.109 12:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.367 12:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.367 12:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:01.367 12:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:01.367 12:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.367 12:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.367 12:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.367 00:06:01.367 real 0m2.031s 00:06:01.367 user 0m1.075s 00:06:01.367 sys 0m0.168s 00:06:01.367 12:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.367 12:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.367 ************************************ 00:06:01.367 END TEST locking_overlapped_coremask_via_rpc 00:06:01.367 ************************************ 00:06:01.367 12:35:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:01.367 12:35:53 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:01.367 12:35:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 
3746071 ]] 00:06:01.367 12:35:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3746071 00:06:01.367 12:35:53 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3746071 ']' 00:06:01.367 12:35:53 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3746071 00:06:01.367 12:35:53 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:01.367 12:35:53 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.367 12:35:53 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3746071 00:06:01.367 12:35:53 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.367 12:35:53 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.367 12:35:53 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3746071' 00:06:01.367 killing process with pid 3746071 00:06:01.367 12:35:53 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3746071 00:06:01.367 12:35:53 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3746071 00:06:01.934 12:35:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3746085 ]] 00:06:01.934 12:35:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3746085 00:06:01.934 12:35:53 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3746085 ']' 00:06:01.934 12:35:53 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3746085 00:06:01.934 12:35:53 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:01.934 12:35:53 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.934 12:35:53 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3746085 00:06:01.934 12:35:53 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:01.934 12:35:53 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:01.934 12:35:53 
event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3746085' 00:06:01.934 killing process with pid 3746085 00:06:01.934 12:35:53 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3746085 00:06:01.934 12:35:53 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3746085 00:06:02.501 12:35:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:02.501 12:35:54 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:02.501 12:35:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3746071 ]] 00:06:02.501 12:35:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3746071 00:06:02.501 12:35:54 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3746071 ']' 00:06:02.501 12:35:54 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3746071 00:06:02.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3746071) - No such process 00:06:02.501 12:35:54 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3746071 is not found' 00:06:02.501 Process with pid 3746071 is not found 00:06:02.501 12:35:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3746085 ]] 00:06:02.501 12:35:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3746085 00:06:02.501 12:35:54 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3746085 ']' 00:06:02.501 12:35:54 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3746085 00:06:02.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3746085) - No such process 00:06:02.501 12:35:54 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3746085 is not found' 00:06:02.501 Process with pid 3746085 is not found 00:06:02.501 12:35:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:02.501 00:06:02.501 real 0m17.069s 00:06:02.501 user 0m29.369s 00:06:02.501 sys 0m5.612s 00:06:02.501 12:35:54 
event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.501 12:35:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.501 ************************************ 00:06:02.501 END TEST cpu_locks 00:06:02.501 ************************************ 00:06:02.501 12:35:54 event -- common/autotest_common.sh@1142 -- # return 0 00:06:02.501 00:06:02.501 real 0m44.320s 00:06:02.501 user 1m25.679s 00:06:02.501 sys 0m9.519s 00:06:02.501 12:35:54 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.501 12:35:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.501 ************************************ 00:06:02.501 END TEST event 00:06:02.501 ************************************ 00:06:02.501 12:35:54 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.501 12:35:54 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:02.501 12:35:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.501 12:35:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.501 12:35:54 -- common/autotest_common.sh@10 -- # set +x 00:06:02.501 ************************************ 00:06:02.501 START TEST thread 00:06:02.501 ************************************ 00:06:02.501 12:35:54 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:02.759 * Looking for test storage... 
00:06:02.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:02.759 12:35:54 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.759 12:35:54 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:02.759 12:35:54 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.759 12:35:54 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.759 ************************************ 00:06:02.759 START TEST thread_poller_perf 00:06:02.759 ************************************ 00:06:02.759 12:35:54 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.759 [2024-07-15 12:35:54.514607] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:06:02.759 [2024-07-15 12:35:54.514676] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746721 ] 00:06:02.759 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.759 [2024-07-15 12:35:54.595531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.759 [2024-07-15 12:35:54.687535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.759 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:04.136 ====================================== 00:06:04.136 busy:2206929474 (cyc) 00:06:04.136 total_run_count: 255000 00:06:04.136 tsc_hz: 2200000000 (cyc) 00:06:04.136 ====================================== 00:06:04.136 poller_cost: 8654 (cyc), 3933 (nsec) 00:06:04.136 00:06:04.136 real 0m1.282s 00:06:04.136 user 0m1.189s 00:06:04.136 sys 0m0.087s 00:06:04.136 12:35:55 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.136 12:35:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.136 ************************************ 00:06:04.136 END TEST thread_poller_perf 00:06:04.136 ************************************ 00:06:04.136 12:35:55 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:04.136 12:35:55 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.136 12:35:55 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:04.136 12:35:55 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.136 12:35:55 thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.136 ************************************ 00:06:04.136 START TEST thread_poller_perf 00:06:04.136 ************************************ 00:06:04.136 12:35:55 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.136 [2024-07-15 12:35:55.861670] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:06:04.136 [2024-07-15 12:35:55.861727] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746987 ] 00:06:04.136 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.136 [2024-07-15 12:35:55.942443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.136 [2024-07-15 12:35:56.032119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.136 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:05.511 ====================================== 00:06:05.511 busy:2202081440 (cyc) 00:06:05.511 total_run_count: 3379000 00:06:05.511 tsc_hz: 2200000000 (cyc) 00:06:05.511 ====================================== 00:06:05.511 poller_cost: 651 (cyc), 295 (nsec) 00:06:05.511 00:06:05.511 real 0m1.267s 00:06:05.511 user 0m1.177s 00:06:05.511 sys 0m0.086s 00:06:05.511 12:35:57 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.511 12:35:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.511 ************************************ 00:06:05.511 END TEST thread_poller_perf 00:06:05.511 ************************************ 00:06:05.511 12:35:57 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:05.511 12:35:57 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:05.511 00:06:05.511 real 0m2.779s 00:06:05.511 user 0m2.455s 00:06:05.511 sys 0m0.331s 00:06:05.511 12:35:57 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.511 12:35:57 thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.511 ************************************ 00:06:05.511 END TEST thread 00:06:05.511 ************************************ 00:06:05.511 12:35:57 -- common/autotest_common.sh@1142 -- # return 0 00:06:05.511 12:35:57 -- spdk/autotest.sh@183 -- # run_test 
accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:05.511 12:35:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.511 12:35:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.511 12:35:57 -- common/autotest_common.sh@10 -- # set +x 00:06:05.511 ************************************ 00:06:05.511 START TEST accel 00:06:05.511 ************************************ 00:06:05.511 12:35:57 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:05.511 * Looking for test storage... 00:06:05.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:05.511 12:35:57 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:05.511 12:35:57 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:05.511 12:35:57 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:05.511 12:35:57 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3747310 00:06:05.511 12:35:57 accel -- accel/accel.sh@63 -- # waitforlisten 3747310 00:06:05.511 12:35:57 accel -- common/autotest_common.sh@829 -- # '[' -z 3747310 ']' 00:06:05.511 12:35:57 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.511 12:35:57 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:05.511 12:35:57 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.511 12:35:57 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:05.511 12:35:57 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:05.511 12:35:57 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.511 12:35:57 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.511 12:35:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.511 12:35:57 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.511 12:35:57 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.511 12:35:57 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.511 12:35:57 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.511 12:35:57 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:05.511 12:35:57 accel -- accel/accel.sh@41 -- # jq -r . 00:06:05.511 [2024-07-15 12:35:57.364035] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:06:05.511 [2024-07-15 12:35:57.364100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747310 ] 00:06:05.511 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.511 [2024-07-15 12:35:57.443382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.769 [2024-07-15 12:35:57.535186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.706 12:35:58 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.706 12:35:58 accel -- common/autotest_common.sh@862 -- # return 0 00:06:06.706 12:35:58 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:06.706 12:35:58 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:06.706 12:35:58 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:06.706 12:35:58 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:06.706 12:35:58 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:06.706 12:35:58 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:06.706 12:35:58 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.706 12:35:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.706 12:35:58 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:06.706 12:35:58 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.706 12:35:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.706 12:35:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.706 12:35:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.706 12:35:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.706 12:35:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.706 12:35:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.706 12:35:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.706 12:35:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.706 12:35:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.706 12:35:58 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:06:06.706 12:35:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.706 12:35:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.706 12:35:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.706 12:35:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.706 12:35:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.706 12:35:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.706 12:35:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.706 12:35:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.706 12:35:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.706 12:35:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.706 12:35:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.706 12:35:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.706 12:35:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # 
IFS== 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.706 12:35:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.706 12:35:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.706 12:35:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.706 12:35:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.706 12:35:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.706 12:35:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.706 12:35:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.706 12:35:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.706 12:35:58 accel -- accel/accel.sh@75 -- # killprocess 3747310 00:06:06.706 12:35:58 accel -- common/autotest_common.sh@948 -- # '[' -z 3747310 ']' 00:06:06.706 12:35:58 accel -- common/autotest_common.sh@952 -- # kill -0 3747310 00:06:06.706 12:35:58 accel -- common/autotest_common.sh@953 -- # uname 00:06:06.706 12:35:58 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.706 12:35:58 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3747310 00:06:06.706 12:35:58 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:06.707 12:35:58 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.707 12:35:58 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3747310' 00:06:06.707 killing process with pid 3747310 00:06:06.707 12:35:58 accel -- common/autotest_common.sh@967 -- # kill 3747310 00:06:06.707 
12:35:58 accel -- common/autotest_common.sh@972 -- # wait 3747310 00:06:07.039 12:35:58 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:07.039 12:35:58 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:07.039 12:35:58 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:07.039 12:35:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.039 12:35:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.039 12:35:58 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:07.039 12:35:58 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:07.039 12:35:58 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:07.039 12:35:58 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.039 12:35:58 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.039 12:35:58 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.039 12:35:58 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.039 12:35:58 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.039 12:35:58 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:07.039 12:35:58 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:07.039 12:35:58 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.039 12:35:58 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:07.039 12:35:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.039 12:35:58 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:07.039 12:35:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:07.039 12:35:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.039 12:35:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.039 ************************************ 00:06:07.039 START TEST accel_missing_filename 00:06:07.039 ************************************ 00:06:07.039 12:35:58 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:07.039 12:35:58 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:07.039 12:35:58 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:07.039 12:35:58 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:07.039 12:35:58 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.039 12:35:58 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:07.039 12:35:58 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.039 12:35:58 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:07.039 12:35:58 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:07.039 12:35:58 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:07.039 12:35:58 
accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.039 12:35:58 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.039 12:35:58 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.039 12:35:58 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.039 12:35:58 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.039 12:35:58 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:07.039 12:35:58 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:07.039 [2024-07-15 12:35:58.891286] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:06:07.039 [2024-07-15 12:35:58.891355] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747661 ] 00:06:07.039 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.039 [2024-07-15 12:35:58.975105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.309 [2024-07-15 12:35:59.067582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.309 [2024-07-15 12:35:59.112853] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.309 [2024-07-15 12:35:59.175721] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:07.568 A filename is required. 
00:06:07.568 12:35:59 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:07.568 12:35:59 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:07.568 12:35:59 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:07.568 12:35:59 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:07.568 12:35:59 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:07.568 12:35:59 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:07.568 00:06:07.568 real 0m0.394s 00:06:07.568 user 0m0.291s 00:06:07.568 sys 0m0.141s 00:06:07.568 12:35:59 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.568 12:35:59 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:07.568 ************************************ 00:06:07.568 END TEST accel_missing_filename 00:06:07.568 ************************************ 00:06:07.568 12:35:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.568 12:35:59 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:07.568 12:35:59 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:07.568 12:35:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.568 12:35:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.568 ************************************ 00:06:07.568 START TEST accel_compress_verify 00:06:07.568 ************************************ 00:06:07.568 12:35:59 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:07.568 12:35:59 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:07.568 12:35:59 
accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:07.568 12:35:59 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:07.568 12:35:59 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.568 12:35:59 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:07.568 12:35:59 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.568 12:35:59 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:07.568 12:35:59 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:07.568 12:35:59 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:07.568 12:35:59 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.568 12:35:59 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.568 12:35:59 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.568 12:35:59 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.568 12:35:59 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.568 12:35:59 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:07.568 12:35:59 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:07.568 [2024-07-15 12:35:59.355977] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:06:07.568 [2024-07-15 12:35:59.356042] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747883 ] 00:06:07.568 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.568 [2024-07-15 12:35:59.439373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.827 [2024-07-15 12:35:59.530498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.827 [2024-07-15 12:35:59.575784] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.827 [2024-07-15 12:35:59.638686] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:07.827 00:06:07.827 Compression does not support the verify option, aborting. 00:06:07.827 12:35:59 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:07.827 12:35:59 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:07.827 12:35:59 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:07.827 12:35:59 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:07.827 12:35:59 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:07.827 12:35:59 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:07.827 00:06:07.827 real 0m0.392s 00:06:07.827 user 0m0.296s 00:06:07.827 sys 0m0.140s 00:06:07.827 12:35:59 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.827 12:35:59 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:07.827 ************************************ 00:06:07.827 END TEST accel_compress_verify 00:06:07.827 ************************************ 00:06:07.827 12:35:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.828 12:35:59 accel -- 
accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:07.828 12:35:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:07.828 12:35:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.828 12:35:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.086 ************************************ 00:06:08.086 START TEST accel_wrong_workload 00:06:08.086 ************************************ 00:06:08.086 12:35:59 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:08.086 12:35:59 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:08.086 12:35:59 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:08.086 12:35:59 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:08.086 12:35:59 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.086 12:35:59 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:08.086 12:35:59 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.086 12:35:59 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:08.086 12:35:59 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:08.086 12:35:59 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:08.086 12:35:59 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.086 12:35:59 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.086 12:35:59 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.086 12:35:59 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:06:08.086 12:35:59 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.086 12:35:59 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:08.086 12:35:59 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:08.086 Unsupported workload type: foobar 00:06:08.086 [2024-07-15 12:35:59.815260] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:08.086 accel_perf options: 00:06:08.086 [-h help message] 00:06:08.087 [-q queue depth per core] 00:06:08.087 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:08.087 [-T number of threads per core 00:06:08.087 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:08.087 [-t time in seconds] 00:06:08.087 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:08.087 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:08.087 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:08.087 [-l for compress/decompress workloads, name of uncompressed input file 00:06:08.087 [-S for crc32c workload, use this seed value (default 0) 00:06:08.087 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:08.087 [-f for fill workload, use this BYTE value (default 255) 00:06:08.087 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:08.087 [-y verify result if this switch is on] 00:06:08.087 [-a tasks to allocate per core (default: same value as -q)] 00:06:08.087 Can be used to spread operations across a wider range of memory. 
00:06:08.087 12:35:59 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:08.087 12:35:59 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.087 12:35:59 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:08.087 12:35:59 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.087 00:06:08.087 real 0m0.034s 00:06:08.087 user 0m0.019s 00:06:08.087 sys 0m0.014s 00:06:08.087 12:35:59 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.087 12:35:59 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:08.087 ************************************ 00:06:08.087 END TEST accel_wrong_workload 00:06:08.087 ************************************ 00:06:08.087 Error: writing output failed: Broken pipe 00:06:08.087 12:35:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.087 12:35:59 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:08.087 12:35:59 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:08.087 12:35:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.087 12:35:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.087 ************************************ 00:06:08.087 START TEST accel_negative_buffers 00:06:08.087 ************************************ 00:06:08.087 12:35:59 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:08.087 12:35:59 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:08.087 12:35:59 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:08.087 12:35:59 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:08.087 12:35:59 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.087 12:35:59 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:08.087 12:35:59 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.087 12:35:59 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:08.087 12:35:59 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:08.087 12:35:59 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:08.087 12:35:59 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.087 12:35:59 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.087 12:35:59 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.087 12:35:59 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.087 12:35:59 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.087 12:35:59 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:08.087 12:35:59 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:08.087 -x option must be non-negative. 00:06:08.087 [2024-07-15 12:35:59.918006] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:08.087 accel_perf options: 00:06:08.087 [-h help message] 00:06:08.087 [-q queue depth per core] 00:06:08.087 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:08.087 [-T number of threads per core 00:06:08.087 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:08.087 [-t time in seconds] 00:06:08.087 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:08.087 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:08.087 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:08.087 [-l for compress/decompress workloads, name of uncompressed input file 00:06:08.087 [-S for crc32c workload, use this seed value (default 0) 00:06:08.087 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:08.087 [-f for fill workload, use this BYTE value (default 255) 00:06:08.087 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:08.087 [-y verify result if this switch is on] 00:06:08.087 [-a tasks to allocate per core (default: same value as -q)] 00:06:08.087 Can be used to spread operations across a wider range of memory. 
00:06:08.087 12:35:59 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:08.087 12:35:59 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.087 12:35:59 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:08.087 12:35:59 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.087 00:06:08.087 real 0m0.033s 00:06:08.087 user 0m0.020s 00:06:08.087 sys 0m0.013s 00:06:08.087 12:35:59 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.087 12:35:59 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:08.087 ************************************ 00:06:08.087 END TEST accel_negative_buffers 00:06:08.087 ************************************ 00:06:08.087 Error: writing output failed: Broken pipe 00:06:08.087 12:35:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.087 12:35:59 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:08.087 12:35:59 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:08.087 12:35:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.087 12:35:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.087 ************************************ 00:06:08.087 START TEST accel_crc32c 00:06:08.087 ************************************ 00:06:08.087 12:35:59 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:08.087 12:35:59 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:08.087 12:35:59 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:08.087 12:35:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.087 12:35:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.087 12:35:59 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 
00:06:08.087 12:35:59 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:08.087 12:35:59 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:08.087 12:35:59 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.087 12:35:59 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.087 12:35:59 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.087 12:35:59 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.087 12:35:59 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.087 12:35:59 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:08.087 12:35:59 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:08.087 [2024-07-15 12:36:00.025318] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:06:08.087 [2024-07-15 12:36:00.025380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747951 ] 00:06:08.346 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.346 [2024-07-15 12:36:00.108210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.346 [2024-07-15 12:36:00.200307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.346 
12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.346 12:36:00 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # 
IFS=: 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.346 12:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.722 12:36:01 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val= 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:09.722 12:36:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.722 00:06:09.722 real 0m1.402s 00:06:09.722 user 0m1.270s 00:06:09.722 sys 0m0.142s 00:06:09.722 12:36:01 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.722 12:36:01 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:09.722 ************************************ 00:06:09.722 END TEST accel_crc32c 00:06:09.722 ************************************ 00:06:09.722 12:36:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.722 12:36:01 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:09.722 12:36:01 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:09.722 12:36:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.722 12:36:01 accel -- common/autotest_common.sh@10 -- # set +x 
00:06:09.722 ************************************ 00:06:09.722 START TEST accel_crc32c_C2 00:06:09.722 ************************************ 00:06:09.722 12:36:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:09.722 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.722 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:09.722 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.722 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.722 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:09.722 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:09.722 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.722 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.722 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.722 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.722 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.722 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.722 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:09.722 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:09.722 [2024-07-15 12:36:01.493189] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:06:09.722 [2024-07-15 12:36:01.493280] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748309 ] 00:06:09.722 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.722 [2024-07-15 12:36:01.574302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.981 [2024-07-15 12:36:01.664324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
read -r var val 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.981 12:36:01 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.981 12:36:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.920 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.920 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.920 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.920 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.920 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.920 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.920 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.920 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.920 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.920 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.920 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.920 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.920 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.920 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.920 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.920 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.920 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.179 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.179 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.179 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.179 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.179 
12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.179 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.179 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.179 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.179 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:11.179 12:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.179 00:06:11.179 real 0m1.396s 00:06:11.179 user 0m1.283s 00:06:11.179 sys 0m0.125s 00:06:11.179 12:36:02 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.179 12:36:02 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:11.179 ************************************ 00:06:11.179 END TEST accel_crc32c_C2 00:06:11.179 ************************************ 00:06:11.179 12:36:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:11.179 12:36:02 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:11.179 12:36:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:11.179 12:36:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.179 12:36:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.179 ************************************ 00:06:11.179 START TEST accel_copy 00:06:11.179 ************************************ 00:06:11.179 12:36:02 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:11.179 12:36:02 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:11.179 12:36:02 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:11.179 12:36:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.179 12:36:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.179 12:36:02 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 
00:06:11.179 12:36:02 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:11.179 12:36:02 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:11.179 12:36:02 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.179 12:36:02 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.179 12:36:02 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.179 12:36:02 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.179 12:36:02 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.179 12:36:02 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:11.179 12:36:02 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:11.179 [2024-07-15 12:36:02.952149] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:06:11.179 [2024-07-15 12:36:02.952198] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748644 ] 00:06:11.179 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.179 [2024-07-15 12:36:03.032393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.439 [2024-07-15 12:36:03.120319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.439 12:36:03 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.439 12:36:03 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.439 12:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:12.381 12:36:04 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.381 00:06:12.381 real 0m1.388s 00:06:12.381 user 0m1.264s 00:06:12.381 sys 0m0.135s 00:06:12.381 12:36:04 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.381 12:36:04 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:12.381 ************************************ 00:06:12.381 END TEST accel_copy 00:06:12.381 ************************************ 00:06:12.639 12:36:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.639 12:36:04 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:12.639 12:36:04 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:12.639 12:36:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.639 12:36:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.639 ************************************ 00:06:12.639 START TEST accel_fill 00:06:12.639 ************************************ 00:06:12.639 12:36:04 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:12.639 12:36:04 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:12.639 12:36:04 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:12.639 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.639 12:36:04 
accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.639 12:36:04 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:12.639 12:36:04 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:12.639 12:36:04 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:12.639 12:36:04 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.639 12:36:04 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.639 12:36:04 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.639 12:36:04 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.639 12:36:04 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.639 12:36:04 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:12.639 12:36:04 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:12.639 [2024-07-15 12:36:04.408888] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:06:12.639 [2024-07-15 12:36:04.408953] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748922 ] 00:06:12.639 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.639 [2024-07-15 12:36:04.489463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.639 [2024-07-15 12:36:04.577119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.898 12:36:04 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.898 12:36:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.835 12:36:05 accel.accel_fill -- accel/accel.sh@20 
-- # val= 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:13.836 12:36:05 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:13.836 00:06:13.836 real 0m1.391s 00:06:13.836 user 0m1.271s 00:06:13.836 sys 0m0.131s 00:06:13.836 12:36:05 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.836 12:36:05 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:13.836 ************************************ 00:06:13.836 END TEST accel_fill 00:06:13.836 ************************************ 00:06:14.094 12:36:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.094 12:36:05 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:14.094 12:36:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:14.094 12:36:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.094 12:36:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.094 ************************************ 00:06:14.094 START TEST accel_copy_crc32c 00:06:14.094 ************************************ 00:06:14.094 12:36:05 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:14.094 12:36:05 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:14.094 12:36:05 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:14.094 12:36:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.094 12:36:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.094 12:36:05 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:14.094 12:36:05 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:14.094 12:36:05 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:14.094 12:36:05 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.094 12:36:05 accel.accel_copy_crc32c -- accel/accel.sh@32 -- 
# [[ 0 -gt 0 ]] 00:06:14.094 12:36:05 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.094 12:36:05 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.094 12:36:05 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.094 12:36:05 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:14.094 12:36:05 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:14.094 [2024-07-15 12:36:05.861898] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:06:14.094 [2024-07-15 12:36:05.861951] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3749207 ] 00:06:14.094 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.094 [2024-07-15 12:36:05.941862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.094 [2024-07-15 12:36:06.029875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.353 12:36:06 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.353 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 
bytes' 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- 
# read -r var val 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.354 12:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.289 00:06:15.289 real 0m1.388s 00:06:15.289 user 0m1.268s 00:06:15.289 sys 0m0.132s 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.289 12:36:07 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:15.289 ************************************ 00:06:15.289 END TEST accel_copy_crc32c 
00:06:15.289 ************************************ 00:06:15.548 12:36:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:15.548 12:36:07 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:15.548 12:36:07 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:15.548 12:36:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.548 12:36:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.548 ************************************ 00:06:15.548 START TEST accel_copy_crc32c_C2 00:06:15.548 ************************************ 00:06:15.548 12:36:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:15.548 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.548 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:15.548 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.548 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:15.548 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.548 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:15.548 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.548 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.548 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.548 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.548 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.548 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.548 12:36:07 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:15.548 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:15.548 [2024-07-15 12:36:07.318318] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:06:15.548 [2024-07-15 12:36:07.318388] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3749484 ] 00:06:15.548 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.548 [2024-07-15 12:36:07.399874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.807 [2024-07-15 12:36:07.489829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.807 12:36:07 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.807 
12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.807 12:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.741 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.741 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.741 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.741 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.741 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.999 12:36:08 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.999 00:06:16.999 real 0m1.395s 00:06:16.999 user 0m1.265s 00:06:16.999 sys 0m0.141s 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.999 12:36:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:16.999 ************************************ 00:06:16.999 
END TEST accel_copy_crc32c_C2 00:06:16.999 ************************************ 00:06:16.999 12:36:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.999 12:36:08 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:16.999 12:36:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:16.999 12:36:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.999 12:36:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.999 ************************************ 00:06:16.999 START TEST accel_dualcast 00:06:16.999 ************************************ 00:06:16.999 12:36:08 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:16.999 12:36:08 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:16.999 12:36:08 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:16.999 12:36:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.999 12:36:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.999 12:36:08 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:16.999 12:36:08 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:16.999 12:36:08 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:16.999 12:36:08 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.999 12:36:08 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.999 12:36:08 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.999 12:36:08 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.999 12:36:08 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.999 12:36:08 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:16.999 12:36:08 accel.accel_dualcast -- 
accel/accel.sh@41 -- # jq -r . 00:06:16.999 [2024-07-15 12:36:08.782001] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:06:16.999 [2024-07-15 12:36:08.782067] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3749960 ] 00:06:16.999 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.999 [2024-07-15 12:36:08.864430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.257 [2024-07-15 12:36:08.953442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.257 12:36:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:17.257 12:36:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.257 12:36:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.257 12:36:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.257 12:36:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:17.257 12:36:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 
00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.257 12:36:09 accel.accel_dualcast 
-- accel/accel.sh@19 -- # IFS=: 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.257 12:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.257 12:36:09 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:06:18.207 12:36:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.207 12:36:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.207 12:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.207 12:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.207 12:36:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.207 12:36:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.207 12:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.207 12:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.207 12:36:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.207 12:36:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.207 12:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.207 12:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.207 12:36:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.207 12:36:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.207 12:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.207 12:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.207 12:36:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.464 12:36:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.464 12:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.464 12:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.464 12:36:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.464 12:36:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.464 12:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.464 12:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.464 12:36:10 accel.accel_dualcast -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.464 12:36:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:18.464 12:36:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.464 00:06:18.464 real 0m1.394s 00:06:18.464 user 0m1.265s 00:06:18.464 sys 0m0.139s 00:06:18.464 12:36:10 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.464 12:36:10 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:18.464 ************************************ 00:06:18.465 END TEST accel_dualcast 00:06:18.465 ************************************ 00:06:18.465 12:36:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.465 12:36:10 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:18.465 12:36:10 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:18.465 12:36:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.465 12:36:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.465 ************************************ 00:06:18.465 START TEST accel_compare 00:06:18.465 ************************************ 00:06:18.465 12:36:10 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:18.465 12:36:10 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:18.465 12:36:10 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:18.465 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.465 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.465 12:36:10 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:18.465 12:36:10 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:18.465 12:36:10 accel.accel_compare -- accel/accel.sh@12 -- # 
build_accel_config 00:06:18.465 12:36:10 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.465 12:36:10 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.465 12:36:10 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.465 12:36:10 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.465 12:36:10 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.465 12:36:10 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:18.465 12:36:10 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:18.465 [2024-07-15 12:36:10.242287] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:06:18.465 [2024-07-15 12:36:10.242349] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3750434 ] 00:06:18.465 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.465 [2024-07-15 12:36:10.325912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.723 [2024-07-15 12:36:10.413305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:18.723 
12:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:18.723 
12:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.723 
12:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.723 12:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.098 12:36:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.098 12:36:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.098 12:36:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.098 12:36:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.098 12:36:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.098 12:36:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.098 12:36:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.098 12:36:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.098 12:36:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.098 12:36:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.098 12:36:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.098 12:36:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.098 12:36:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.098 12:36:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.098 12:36:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.098 12:36:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.098 12:36:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.098 12:36:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.099 12:36:11 accel.accel_compare -- 
accel/accel.sh@19 -- # IFS=: 00:06:20.099 12:36:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.099 12:36:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.099 12:36:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.099 12:36:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.099 12:36:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.099 12:36:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.099 12:36:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:20.099 12:36:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.099 00:06:20.099 real 0m1.393s 00:06:20.099 user 0m1.262s 00:06:20.099 sys 0m0.142s 00:06:20.099 12:36:11 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.099 12:36:11 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:20.099 ************************************ 00:06:20.099 END TEST accel_compare 00:06:20.099 ************************************ 00:06:20.099 12:36:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.099 12:36:11 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:20.099 12:36:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:20.099 12:36:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.099 12:36:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.099 ************************************ 00:06:20.099 START TEST accel_xor 00:06:20.099 ************************************ 00:06:20.099 12:36:11 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:20.099 [2024-07-15 12:36:11.703469] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:06:20.099 [2024-07-15 12:36:11.703523] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3750715 ] 00:06:20.099 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.099 [2024-07-15 12:36:11.785499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.099 [2024-07-15 12:36:11.877881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.099 
12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.099 12:36:11 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.099 12:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.475 00:06:21.475 real 0m1.398s 00:06:21.475 user 0m1.266s 00:06:21.475 sys 
0m0.144s 00:06:21.475 12:36:13 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.475 12:36:13 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:21.475 ************************************ 00:06:21.475 END TEST accel_xor 00:06:21.475 ************************************ 00:06:21.475 12:36:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.475 12:36:13 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:21.475 12:36:13 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:21.475 12:36:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.475 12:36:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.475 ************************************ 00:06:21.475 START TEST accel_xor 00:06:21.475 ************************************ 00:06:21.475 12:36:13 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.475 12:36:13 accel.accel_xor -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:21.475 [2024-07-15 12:36:13.165930] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:06:21.475 [2024-07-15 12:36:13.165987] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3751001 ] 00:06:21.475 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.475 [2024-07-15 12:36:13.247228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.475 [2024-07-15 12:36:13.338171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.475 12:36:13 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.476 12:36:13 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.476 12:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.852 12:36:14 accel.accel_xor -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:22.852 12:36:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.852 00:06:22.852 real 0m1.392s 00:06:22.852 user 0m1.275s 00:06:22.852 sys 0m0.128s 00:06:22.852 12:36:14 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.852 12:36:14 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:22.852 ************************************ 00:06:22.852 END TEST accel_xor 00:06:22.852 ************************************ 00:06:22.852 12:36:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.852 12:36:14 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:22.852 12:36:14 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:22.852 12:36:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.852 12:36:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.852 ************************************ 00:06:22.852 START TEST accel_dif_verify 00:06:22.852 ************************************ 00:06:22.852 12:36:14 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:22.852 12:36:14 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:22.852 12:36:14 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:22.852 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.852 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.852 12:36:14 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:22.852 12:36:14 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:22.852 12:36:14 accel.accel_dif_verify -- accel/accel.sh@12 -- # 
build_accel_config 00:06:22.852 12:36:14 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.852 12:36:14 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.852 12:36:14 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.852 12:36:14 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.852 12:36:14 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.852 12:36:14 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:22.852 12:36:14 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:22.852 [2024-07-15 12:36:14.633137] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:06:22.852 [2024-07-15 12:36:14.633194] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3751280 ] 00:06:22.852 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.852 [2024-07-15 12:36:14.714793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.112 [2024-07-15 12:36:14.803082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.112 12:36:14 accel.accel_dif_verify 
-- accel/accel.sh@20 -- # val=0x1 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.112 12:36:14 accel.accel_dif_verify 
-- accel/accel.sh@19 -- # read -r var val 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.112 12:36:14 
accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.112 12:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.490 12:36:15 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:24.490 12:36:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ 
software == \s\o\f\t\w\a\r\e ]] 00:06:24.490 00:06:24.490 real 0m1.395s 00:06:24.490 user 0m1.268s 00:06:24.490 sys 0m0.139s 00:06:24.490 12:36:15 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.490 12:36:15 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:24.490 ************************************ 00:06:24.490 END TEST accel_dif_verify 00:06:24.490 ************************************ 00:06:24.490 12:36:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.490 12:36:16 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:24.490 12:36:16 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:24.490 12:36:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.490 12:36:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.490 ************************************ 00:06:24.490 START TEST accel_dif_generate 00:06:24.490 ************************************ 00:06:24.490 12:36:16 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.490 12:36:16 
accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:24.490 [2024-07-15 12:36:16.095268] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:06:24.490 [2024-07-15 12:36:16.095326] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3751563 ] 00:06:24.490 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.490 [2024-07-15 12:36:16.177312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.490 [2024-07-15 12:36:16.267266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@21 
-- # case "$var" in 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 
00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.490 12:36:16 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:24.490 12:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.491 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.491 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.491 12:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.491 12:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.491 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.491 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.491 12:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.491 12:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.491 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.491 12:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.868 12:36:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.868 12:36:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:06:25.868 12:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.868 12:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.868 12:36:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.868 12:36:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.868 12:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.868 12:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.868 12:36:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.868 12:36:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.868 12:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.868 12:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.868 12:36:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.868 12:36:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.868 12:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.868 12:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.868 12:36:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.868 12:36:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.868 12:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.869 12:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.869 12:36:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.869 12:36:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.869 12:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.869 12:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.869 12:36:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.869 12:36:17 accel.accel_dif_generate -- accel/accel.sh@27 -- 
# [[ -n dif_generate ]] 00:06:25.869 12:36:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.869 00:06:25.869 real 0m1.394s 00:06:25.869 user 0m1.276s 00:06:25.869 sys 0m0.132s 00:06:25.869 12:36:17 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.869 12:36:17 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:25.869 ************************************ 00:06:25.869 END TEST accel_dif_generate 00:06:25.869 ************************************ 00:06:25.869 12:36:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.869 12:36:17 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:25.869 12:36:17 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:25.869 12:36:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.869 12:36:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.869 ************************************ 00:06:25.869 START TEST accel_dif_generate_copy 00:06:25.869 ************************************ 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:25.869 12:36:17 
accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:25.869 [2024-07-15 12:36:17.555517] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:06:25.869 [2024-07-15 12:36:17.555569] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3751845 ] 00:06:25.869 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.869 [2024-07-15 12:36:17.637662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.869 [2024-07-15 12:36:17.726537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=1 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.869 12:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.248 12:36:18 
accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.248 00:06:27.248 real 0m1.393s 00:06:27.248 user 0m1.261s 00:06:27.248 sys 0m0.145s 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.248 12:36:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:27.248 ************************************ 00:06:27.248 END TEST accel_dif_generate_copy 00:06:27.248 ************************************ 00:06:27.248 12:36:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.248 12:36:18 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:27.248 12:36:18 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.248 12:36:18 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:27.248 12:36:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.248 12:36:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.248 ************************************ 00:06:27.248 START TEST accel_comp 00:06:27.248 ************************************ 00:06:27.248 12:36:18 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.248 12:36:18 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:27.248 12:36:18 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:27.248 12:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.248 12:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.248 12:36:18 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.248 12:36:18 
accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.248 12:36:18 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:27.248 12:36:18 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.248 12:36:18 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.248 12:36:18 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.248 12:36:18 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.248 12:36:18 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.248 12:36:18 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:27.248 12:36:18 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:27.248 [2024-07-15 12:36:19.014191] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:06:27.248 [2024-07-15 12:36:19.014243] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752128 ] 00:06:27.248 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.248 [2024-07-15 12:36:19.095999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.248 [2024-07-15 12:36:19.184195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.507 12:36:19 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.507 12:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:28.445 12:36:20 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:28.445 12:36:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.445 00:06:28.445 real 0m1.395s 00:06:28.445 user 0m1.266s 00:06:28.445 sys 0m0.141s 00:06:28.445 12:36:20 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.704 12:36:20 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:28.704 ************************************ 00:06:28.704 END TEST accel_comp 00:06:28.704 ************************************ 00:06:28.704 12:36:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:28.704 12:36:20 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:28.704 12:36:20 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:28.704 12:36:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.704 12:36:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.704 ************************************ 00:06:28.704 START TEST accel_decomp 00:06:28.704 ************************************ 00:06:28.704 12:36:20 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:28.704 12:36:20 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:28.704 12:36:20 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:28.704 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.704 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.704 12:36:20 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:28.704 12:36:20 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:28.704 12:36:20 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:28.704 12:36:20 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.704 12:36:20 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.704 12:36:20 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.704 12:36:20 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.704 12:36:20 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.704 12:36:20 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:28.704 12:36:20 accel.accel_decomp -- accel/accel.sh@41 -- 
# jq -r . 00:06:28.704 [2024-07-15 12:36:20.478132] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:06:28.704 [2024-07-15 12:36:20.478199] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752407 ] 00:06:28.704 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.704 [2024-07-15 12:36:20.563669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.962 [2024-07-15 12:36:20.655195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.962 12:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.962 12:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.962 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.962 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.962 12:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.963 12:36:20 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.963 12:36:20 accel.accel_decomp -- 
accel/accel.sh@19 -- # read -r var val 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.963 
12:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.963 12:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.928 12:36:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.928 12:36:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.928 12:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.928 12:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.928 12:36:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.928 12:36:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.928 12:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.928 12:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.928 12:36:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.928 12:36:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.928 12:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.928 12:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.928 12:36:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.928 12:36:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.928 12:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.929 12:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.929 12:36:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.929 12:36:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:06:29.929 12:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.929 12:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.929 12:36:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.929 12:36:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.929 12:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.929 12:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.929 12:36:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.929 12:36:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:29.929 12:36:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.929 00:06:29.929 real 0m1.403s 00:06:29.929 user 0m1.270s 00:06:29.929 sys 0m0.144s 00:06:29.929 12:36:21 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.929 12:36:21 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:29.929 ************************************ 00:06:29.929 END TEST accel_decomp 00:06:29.929 ************************************ 00:06:30.187 12:36:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.187 12:36:21 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:30.187 12:36:21 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:30.187 12:36:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.187 12:36:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.187 ************************************ 00:06:30.187 START TEST accel_decomp_full 00:06:30.187 ************************************ 00:06:30.187 12:36:21 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:30.187 
12:36:21 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:30.187 12:36:21 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:30.187 12:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.187 12:36:21 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:30.187 12:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.187 12:36:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:30.187 12:36:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:30.187 12:36:21 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.187 12:36:21 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.187 12:36:21 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.187 12:36:21 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.187 12:36:21 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.187 12:36:21 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:30.187 12:36:21 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:30.187 [2024-07-15 12:36:21.945769] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:06:30.187 [2024-07-15 12:36:21.945828] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752690 ] 00:06:30.187 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.187 [2024-07-15 12:36:22.028840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.187 [2024-07-15 12:36:22.119005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.446 12:36:22 
accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:30.446 12:36:22 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.446 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.447 12:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 
00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:31.824 12:36:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.824 00:06:31.824 real 0m1.407s 00:06:31.824 user 0m1.283s 00:06:31.824 sys 0m0.137s 00:06:31.824 12:36:23 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.824 12:36:23 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:31.824 ************************************ 00:06:31.824 END TEST accel_decomp_full 00:06:31.824 ************************************ 00:06:31.824 12:36:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.824 12:36:23 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:31.824 12:36:23 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:31.825 12:36:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.825 12:36:23 accel 
-- common/autotest_common.sh@10 -- # set +x 00:06:31.825 ************************************ 00:06:31.825 START TEST accel_decomp_mcore 00:06:31.825 ************************************ 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:31.825 [2024-07-15 12:36:23.425408] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:06:31.825 [2024-07-15 12:36:23.425476] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752981 ] 00:06:31.825 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.825 [2024-07-15 12:36:23.508483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.825 [2024-07-15 12:36:23.602586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.825 [2024-07-15 12:36:23.602696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.825 [2024-07-15 12:36:23.602828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.825 [2024-07-15 12:36:23.602829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:31.825 12:36:23 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # read -r var val 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.825 12:36:23 accel.accel_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.825 12:36:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.200 12:36:24 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.200 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.201 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.201 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.201 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.201 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.201 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.201 
12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.201 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.201 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.201 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.201 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:33.201 12:36:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.201 00:06:33.201 real 0m1.419s 00:06:33.201 user 0m4.643s 00:06:33.201 sys 0m0.157s 00:06:33.201 12:36:24 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.201 12:36:24 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:33.201 ************************************ 00:06:33.201 END TEST accel_decomp_mcore 00:06:33.201 ************************************ 00:06:33.201 12:36:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.201 12:36:24 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:33.201 12:36:24 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:33.201 12:36:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.201 12:36:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.201 ************************************ 00:06:33.201 START TEST accel_decomp_full_mcore 00:06:33.201 ************************************ 00:06:33.201 12:36:24 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:33.201 12:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:33.201 12:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # 
local accel_module 00:06:33.201 12:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.201 12:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.201 12:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:33.201 12:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:33.201 12:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:33.201 12:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.201 12:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.201 12:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.201 12:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.201 12:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.201 12:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:33.201 12:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:33.201 [2024-07-15 12:36:24.910279] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:06:33.201 [2024-07-15 12:36:24.910333] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753269 ] 00:06:33.201 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.201 [2024-07-15 12:36:24.992233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.201 [2024-07-15 12:36:25.087776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.201 [2024-07-15 12:36:25.087803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.201 [2024-07-15 12:36:25.087915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.201 [2024-07-15 12:36:25.087916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.201 12:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.201 12:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.201 12:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.201 12:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.201 12:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.201 12:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.201 12:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.201 12:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.201 12:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.201 12:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.201 12:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.201 12:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.201 12:36:25 
12:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh: parsed accel_perf config via repeated `IFS=:` / `read -r var val` (xtrace condensed): val=0xf, val=decompress (accel_opc=decompress), val='111250 bytes', val=software (accel_module=software), val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, val=32, val=32, val=1, val='1 seconds', val=Yes
12:36:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27: [[ -n software ]], [[ -n decompress ]], [[ software == software ]] all passed
real 0m1.442s, user 0m4.737s, sys 0m0.156s
************************************
END TEST accel_decomp_full_mcore
************************************
12:36:26 accel -- accel/accel.sh@121 -- run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
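The START TEST / END TEST banners and timing lines in this log come from a run_test-style wrapper in the harness; a minimal sketch of that pattern (a simplified stand-in, not SPDK's actual autotest_common.sh helper, and `demo_decomp` is an illustrative name):

```shell
#!/usr/bin/env bash
# Sketch of the banner/timing wrapper pattern visible in this log.
# run_test here is a simplified stand-in, not SPDK's real helper.
run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  "$@"; local rc=$?                      # run the wrapped test command
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

run_test demo_decomp true                # stands in for "accel_test ..."
```

The wrapper propagates the wrapped command's exit code, so a failing test still fails the pipeline stage even though the END banner is printed.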
************************************
START TEST accel_decomp_mthread
************************************
12:36:26 accel.accel_decomp_mthread -- accel/accel.sh@12 -- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 (build_accel_config xtrace condensed)
[2024-07-15 12:36:26.421812] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization...
[ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753595 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 12:36:26.502816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 12:36:26.593312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
12:36:26 accel.accel_decomp_mthread -- accel/accel.sh: parsed accel_perf config via `IFS=:` / `read -r var val` (xtrace condensed): val=0x1, val=decompress (accel_opc=decompress), val='4096 bytes', val=software (accel_module=software), val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, val=32, val=32, val=2, val='1 seconds', val=Yes
12:36:27 accel.accel_decomp_mthread -- accel/accel.sh@27: [[ -n software ]], [[ -n decompress ]], [[ software == software ]] all passed
real 0m1.403s, user 0m1.276s, sys 0m0.139s
************************************
END TEST accel_decomp_mthread
************************************
12:36:27 accel -- accel/accel.sh@122 -- run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
************************************
START TEST accel_decomp_full_mthread
************************************
12:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l
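Most of the xtrace volume in this section is accel.sh looping `IFS=:` / `read -r var val` / `case "$var" in` over the test's key:value config stream; a minimal sketch of that parse-and-dispatch pattern (simplified, and the `workload`/`module`/`qd` keys here are illustrative, not the real accel.sh variable names):

```shell
#!/usr/bin/env bash
# Sketch of the IFS=: / read -r var val / case "$var" loop that produces
# the repeated xtrace lines in this log (keys here are illustrative).
parse_accel_config() {
  local var val accel_opc="" accel_module=""
  while IFS=: read -r var val; do
    val="${val# }"                       # drop the space after ':'
    case "$var" in
      workload) accel_opc=$val ;;        # e.g. decompress
      module)   accel_module=$val ;;     # e.g. software
      *)        ;;                       # other vars ignored here
    esac
  done
  echo "opc=$accel_opc module=$accel_module"
}

printf 'workload: decompress\nmodule: software\nqd: 32\n' | parse_accel_config
```

Because `read` splits only on the first `:` when given two variable names, values containing further colons survive intact, which is why the harness can pass paths and sizes through the same stream.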
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 (build_accel_config xtrace condensed)
[2024-07-15 12:36:27.888135] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization...
[ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753881 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 12:36:27.970954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 12:36:28.061959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
12:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh: parsed accel_perf config via `IFS=:` / `read -r var val` (xtrace condensed): val=0x1, val=decompress (accel_opc=decompress), val='111250 bytes', val=software (accel_module=software), val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, val=32, val=32, val=2, val='1 seconds', val=Yes
12:36:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27: [[ -n software ]], [[ -n decompress ]], [[ software == software ]] all passed
real 0m1.428s, user 0m1.296s, sys 0m0.139s
************************************
END TEST accel_decomp_full_mthread
************************************
12:36:29 accel -- accel/accel.sh@124 -- [[ n == y ]]
12:36:29 accel -- accel/accel.sh@137 -- run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
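The dif binary launched here receives its JSON config as `-c /dev/fd/62`, i.e. a file-descriptor path populated by the calling shell; a minimal sketch of that technique using bash process substitution, with `cat` standing in for the test binary's config reader (`read_json_config` is an illustrative name):

```shell
#!/usr/bin/env bash
# Sketch of passing config through a /dev/fd path, as in "dif -c /dev/fd/62".
# read_json_config is a stand-in for the real binary's -c handling.
read_json_config() { cat "$1"; }

# Process substitution exposes the echoed JSON as a /dev/fd/N path,
# so no temporary config file is ever written to disk.
read_json_config <(echo '{"subsystems": []}')
```

This is why the logged command lines show `/dev/fd/62` rather than a file name: the harness builds the accel JSON config in memory and hands the binary a descriptor.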
************************************
START TEST accel_dif_functional_tests
************************************
12:36:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 (build_accel_config xtrace condensed)
[2024-07-15 12:36:29.404468] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization...
[ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754194 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 12:36:29.483635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
[2024-07-15 12:36:29.574284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
[2024-07-15 12:36:29.574324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
[2024-07-15 12:36:29.574326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0

CUnit - A unit testing framework for C - Version 2.1-3
http://cunit.sourceforge.net/

Suite: accel_dif
  Test: verify: DIF generated, GUARD check ...passed
  Test: verify: DIF generated, APPTAG check ...passed
  Test: verify: DIF generated, REFTAG check ...passed
  Test: verify: DIF not generated, GUARD check ...[2024-07-15 12:36:29.648535] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 ...passed
  Test: verify: DIF not generated, APPTAG check ...[2024-07-15 12:36:29.648602] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a ...passed
  Test: verify: DIF not generated, REFTAG check ...[2024-07-15 12:36:29.648636] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a ...passed
  Test: verify: APPTAG correct, APPTAG check ...passed
  Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 12:36:29.648703] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 ...passed
  Test: verify: APPTAG incorrect, no APPTAG check ...passed
  Test: verify: REFTAG incorrect, REFTAG ignore ...passed
  Test: verify: REFTAG_INIT correct, REFTAG check ...passed
  Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 12:36:29.648855] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 ...passed
  Test: verify copy: DIF generated, GUARD check ...passed
  Test: verify copy: DIF generated, APPTAG check ...passed
  Test: verify copy: DIF generated, REFTAG check ...passed
  Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 12:36:29.649021] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 ...passed
  Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 12:36:29.649054] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a ...passed
  Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 12:36:29.649087] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a ...passed
  Test: generate copy: DIF generated, GUARD check ...passed
  Test: generate copy: DIF generated, APTTAG check ...passed
  Test: generate copy: DIF generated, REFTAG check ...passed
  Test: generate copy: DIF generated, no GUARD check flag set ...passed
  Test: generate copy: DIF generated, no APPTAG check flag set ...passed
  Test: generate copy: DIF generated, no REFTAG check flag set ...passed
  Test: generate copy: iovecs-len validate ...[2024-07-15 12:36:29.649360] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:06:37.920 passed 00:06:37.920 Test: generate copy: buffer alignment validate ...passed 00:06:37.920 00:06:37.920 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.920 suites 1 1 n/a 0 0 00:06:37.920 tests 26 26 26 0 0 00:06:37.920 asserts 115 115 115 0 n/a 00:06:37.920 00:06:37.920 Elapsed time = 0.002 seconds 00:06:37.920 00:06:37.920 real 0m0.473s 00:06:37.920 user 0m0.678s 00:06:37.920 sys 0m0.169s 00:06:37.920 12:36:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.920 12:36:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:37.920 ************************************ 00:06:37.920 END TEST accel_dif_functional_tests 00:06:37.920 ************************************ 00:06:38.178 12:36:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.178 00:06:38.178 real 0m32.652s 00:06:38.178 user 0m36.061s 00:06:38.178 sys 0m4.902s 00:06:38.178 12:36:29 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.178 12:36:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.178 ************************************ 00:06:38.178 END TEST accel 00:06:38.178 ************************************ 00:06:38.178 12:36:29 -- common/autotest_common.sh@1142 -- # return 0 00:06:38.178 12:36:29 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:38.178 12:36:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.178 12:36:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.178 12:36:29 -- common/autotest_common.sh@10 -- # set +x 00:06:38.178 ************************************ 00:06:38.178 START TEST accel_rpc 00:06:38.178 ************************************ 00:06:38.178 12:36:29 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:38.178 * Looking for test storage... 
00:06:38.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:38.178 12:36:30 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:38.178 12:36:30 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3754419 00:06:38.178 12:36:30 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3754419 00:06:38.178 12:36:30 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:38.178 12:36:30 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 3754419 ']' 00:06:38.178 12:36:30 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.178 12:36:30 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.178 12:36:30 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.178 12:36:30 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.178 12:36:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.178 [2024-07-15 12:36:30.083506] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:06:38.178 [2024-07-15 12:36:30.083565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754419 ] 00:06:38.178 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.436 [2024-07-15 12:36:30.163963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.436 [2024-07-15 12:36:30.254613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.436 12:36:30 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.436 12:36:30 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:38.436 12:36:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:38.436 12:36:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:38.436 12:36:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:38.436 12:36:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:38.436 12:36:30 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:38.436 12:36:30 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.436 12:36:30 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.436 12:36:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.436 ************************************ 00:06:38.436 START TEST accel_assign_opcode 00:06:38.436 ************************************ 00:06:38.436 12:36:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:38.436 12:36:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:38.436 12:36:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.436 12:36:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set 
+x 00:06:38.436 [2024-07-15 12:36:30.319129] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:38.436 12:36:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.436 12:36:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:38.436 12:36:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.436 12:36:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:38.436 [2024-07-15 12:36:30.327143] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:38.436 12:36:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.436 12:36:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:38.436 12:36:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.436 12:36:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:38.694 12:36:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.694 12:36:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:38.694 12:36:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:38.694 12:36:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.694 12:36:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:38.694 12:36:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:38.694 12:36:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.694 software 00:06:38.694 00:06:38.694 real 0m0.259s 00:06:38.694 user 0m0.050s 00:06:38.694 sys 0m0.011s 00:06:38.694 12:36:30 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.694 12:36:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:38.694 ************************************ 00:06:38.694 END TEST accel_assign_opcode 00:06:38.694 ************************************ 00:06:38.694 12:36:30 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:38.694 12:36:30 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3754419 00:06:38.694 12:36:30 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 3754419 ']' 00:06:38.694 12:36:30 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 3754419 00:06:38.694 12:36:30 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:38.694 12:36:30 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:38.694 12:36:30 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3754419 00:06:38.952 12:36:30 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:38.952 12:36:30 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:38.952 12:36:30 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3754419' 00:06:38.952 killing process with pid 3754419 00:06:38.952 12:36:30 accel_rpc -- common/autotest_common.sh@967 -- # kill 3754419 00:06:38.952 12:36:30 accel_rpc -- common/autotest_common.sh@972 -- # wait 3754419 00:06:39.211 00:06:39.211 real 0m1.053s 00:06:39.211 user 0m1.019s 00:06:39.211 sys 0m0.447s 00:06:39.211 12:36:30 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.211 12:36:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.211 ************************************ 00:06:39.211 END TEST accel_rpc 00:06:39.211 ************************************ 00:06:39.211 12:36:31 -- common/autotest_common.sh@1142 -- # return 0 00:06:39.211 12:36:31 -- spdk/autotest.sh@185 -- # run_test app_cmdline 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:39.211 12:36:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.211 12:36:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.211 12:36:31 -- common/autotest_common.sh@10 -- # set +x 00:06:39.211 ************************************ 00:06:39.211 START TEST app_cmdline 00:06:39.211 ************************************ 00:06:39.211 12:36:31 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:39.211 * Looking for test storage... 00:06:39.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:39.211 12:36:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:39.211 12:36:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3754754 00:06:39.211 12:36:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3754754 00:06:39.472 12:36:31 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:39.472 12:36:31 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3754754 ']' 00:06:39.472 12:36:31 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.472 12:36:31 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.472 12:36:31 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.472 12:36:31 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.472 12:36:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:39.472 [2024-07-15 12:36:31.207765] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:06:39.472 [2024-07-15 12:36:31.207824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754754 ] 00:06:39.472 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.472 [2024-07-15 12:36:31.282596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.472 [2024-07-15 12:36:31.377703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.756 12:36:31 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.756 12:36:31 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:39.756 12:36:31 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:40.043 { 00:06:40.043 "version": "SPDK v24.09-pre git sha1 32a79de81", 00:06:40.043 "fields": { 00:06:40.043 "major": 24, 00:06:40.043 "minor": 9, 00:06:40.043 "patch": 0, 00:06:40.043 "suffix": "-pre", 00:06:40.043 "commit": "32a79de81" 00:06:40.043 } 00:06:40.043 } 00:06:40.043 12:36:31 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:40.043 12:36:31 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:40.043 12:36:31 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:40.043 12:36:31 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:40.043 12:36:31 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:40.043 12:36:31 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:40.043 12:36:31 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.043 12:36:31 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:40.043 12:36:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:40.043 12:36:31 app_cmdline -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.043 12:36:31 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:40.043 12:36:31 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:40.043 12:36:31 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.043 12:36:31 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:40.043 12:36:31 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.043 12:36:31 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:40.043 12:36:31 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.043 12:36:31 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:40.043 12:36:31 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.043 12:36:31 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:40.043 12:36:31 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.043 12:36:31 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:40.043 12:36:31 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:40.043 12:36:31 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.301 request: 00:06:40.301 { 00:06:40.301 "method": "env_dpdk_get_mem_stats", 00:06:40.301 "req_id": 1 
00:06:40.301 } 00:06:40.302 Got JSON-RPC error response 00:06:40.302 response: 00:06:40.302 { 00:06:40.302 "code": -32601, 00:06:40.302 "message": "Method not found" 00:06:40.302 } 00:06:40.302 12:36:32 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:40.302 12:36:32 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.302 12:36:32 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:40.302 12:36:32 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.302 12:36:32 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3754754 00:06:40.302 12:36:32 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3754754 ']' 00:06:40.302 12:36:32 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3754754 00:06:40.302 12:36:32 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:40.302 12:36:32 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.302 12:36:32 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3754754 00:06:40.559 12:36:32 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.559 12:36:32 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.559 12:36:32 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3754754' 00:06:40.560 killing process with pid 3754754 00:06:40.560 12:36:32 app_cmdline -- common/autotest_common.sh@967 -- # kill 3754754 00:06:40.560 12:36:32 app_cmdline -- common/autotest_common.sh@972 -- # wait 3754754 00:06:40.818 00:06:40.818 real 0m1.523s 00:06:40.818 user 0m2.115s 00:06:40.818 sys 0m0.469s 00:06:40.818 12:36:32 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.818 12:36:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:40.818 ************************************ 00:06:40.818 END TEST app_cmdline 00:06:40.818 ************************************ 00:06:40.818 12:36:32 -- 
common/autotest_common.sh@1142 -- # return 0 00:06:40.818 12:36:32 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:40.818 12:36:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.818 12:36:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.818 12:36:32 -- common/autotest_common.sh@10 -- # set +x 00:06:40.818 ************************************ 00:06:40.818 START TEST version 00:06:40.818 ************************************ 00:06:40.818 12:36:32 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:40.818 * Looking for test storage... 00:06:40.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:40.818 12:36:32 version -- app/version.sh@17 -- # get_header_version major 00:06:40.818 12:36:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:40.818 12:36:32 version -- app/version.sh@14 -- # cut -f2 00:06:40.818 12:36:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:40.818 12:36:32 version -- app/version.sh@17 -- # major=24 00:06:40.818 12:36:32 version -- app/version.sh@18 -- # get_header_version minor 00:06:40.818 12:36:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:40.818 12:36:32 version -- app/version.sh@14 -- # cut -f2 00:06:40.818 12:36:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.078 12:36:32 version -- app/version.sh@18 -- # minor=9 00:06:41.078 12:36:32 version -- app/version.sh@19 -- # get_header_version patch 00:06:41.078 12:36:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:41.078 
12:36:32 version -- app/version.sh@14 -- # cut -f2 00:06:41.078 12:36:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.078 12:36:32 version -- app/version.sh@19 -- # patch=0 00:06:41.078 12:36:32 version -- app/version.sh@20 -- # get_header_version suffix 00:06:41.078 12:36:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:41.078 12:36:32 version -- app/version.sh@14 -- # cut -f2 00:06:41.078 12:36:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.078 12:36:32 version -- app/version.sh@20 -- # suffix=-pre 00:06:41.078 12:36:32 version -- app/version.sh@22 -- # version=24.9 00:06:41.078 12:36:32 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:41.078 12:36:32 version -- app/version.sh@28 -- # version=24.9rc0 00:06:41.078 12:36:32 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:41.078 12:36:32 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:41.078 12:36:32 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:41.078 12:36:32 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:41.078 00:06:41.078 real 0m0.166s 00:06:41.078 user 0m0.094s 00:06:41.078 sys 0m0.109s 00:06:41.078 12:36:32 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.078 12:36:32 version -- common/autotest_common.sh@10 -- # set +x 00:06:41.078 ************************************ 00:06:41.078 END TEST version 00:06:41.078 ************************************ 00:06:41.078 12:36:32 -- common/autotest_common.sh@1142 -- # return 0 00:06:41.078 12:36:32 -- spdk/autotest.sh@188 -- # 
'[' 0 -eq 1 ']' 00:06:41.078 12:36:32 -- spdk/autotest.sh@198 -- # uname -s 00:06:41.078 12:36:32 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:41.078 12:36:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:41.078 12:36:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:41.078 12:36:32 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:41.078 12:36:32 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:41.078 12:36:32 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:41.078 12:36:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:41.078 12:36:32 -- common/autotest_common.sh@10 -- # set +x 00:06:41.078 12:36:32 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:41.078 12:36:32 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:41.078 12:36:32 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:41.078 12:36:32 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:41.078 12:36:32 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:41.078 12:36:32 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:41.078 12:36:32 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:41.078 12:36:32 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:41.078 12:36:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.078 12:36:32 -- common/autotest_common.sh@10 -- # set +x 00:06:41.078 ************************************ 00:06:41.078 START TEST nvmf_tcp 00:06:41.078 ************************************ 00:06:41.078 12:36:32 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:41.078 * Looking for test storage... 00:06:41.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:41.338 12:36:33 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.338 12:36:33 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.338 12:36:33 nvmf_tcp -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.338 12:36:33 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.338 12:36:33 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.338 12:36:33 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.338 12:36:33 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:41.338 12:36:33 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.338 12:36:33 nvmf_tcp -- 
nvmf/common.sh@47 -- # : 0 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:41.338 12:36:33 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:41.338 12:36:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:41.338 12:36:33 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:41.338 12:36:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:41.338 12:36:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.338 12:36:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.338 ************************************ 00:06:41.338 START TEST nvmf_example 00:06:41.338 ************************************ 00:06:41.338 12:36:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:41.338 * Looking for test storage... 
00:06:41.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.339 12:36:33 
nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:41.339 12:36:33 
nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.339 
12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:41.339 12:36:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:47.905 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:47.905 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:47.905 Found net devices under 0000:af:00.0: cvl_0_0 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:47.905 Found net devices under 0000:af:00.1: cvl_0_1 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:47.905 12:36:38 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:47.905 12:36:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:47.905 12:36:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:47.905 12:36:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:47.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:47.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:06:47.905 00:06:47.905 --- 10.0.0.2 ping statistics --- 00:06:47.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.905 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:06:47.905 12:36:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:47.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:47.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:06:47.905 00:06:47.905 --- 10.0.0.1 ping statistics --- 00:06:47.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.906 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:47.906 12:36:39 
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3758320 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3758320 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3758320 ']' 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.906 12:36:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:47.906 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.163 12:36:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.163 12:36:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:48.163 12:36:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:48.163 12:36:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:48.163 12:36:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:48.163 12:36:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:48.163 12:36:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.163 12:36:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:48.421 12:36:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.421 12:36:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:48.421 12:36:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.421 12:36:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:48.421 12:36:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.421 12:36:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:48.421 12:36:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:48.421 12:36:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.421 12:36:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:48.421 12:36:40 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.421 12:36:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:48.421 12:36:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:48.421 12:36:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.421 12:36:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:48.421 12:36:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.421 12:36:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:48.421 12:36:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.421 12:36:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:48.421 12:36:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.421 12:36:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:48.421 12:36:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:48.421 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.649 Initializing NVMe Controllers 00:07:00.649 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:00.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:00.649 Initialization complete. Launching workers. 
00:07:00.649 ======================================================== 00:07:00.649 Latency(us) 00:07:00.649 Device Information : IOPS MiB/s Average min max 00:07:00.649 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11092.92 43.33 5769.64 903.09 17195.63 00:07:00.649 ======================================================== 00:07:00.649 Total : 11092.92 43.33 5769.64 903.09 17195.63 00:07:00.649 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:00.649 rmmod nvme_tcp 00:07:00.649 rmmod nvme_fabrics 00:07:00.649 rmmod nvme_keyring 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3758320 ']' 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3758320 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3758320 ']' 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3758320 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3758320 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3758320' 00:07:00.649 killing process with pid 3758320 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 3758320 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 3758320 00:07:00.649 nvmf threads initialize successfully 00:07:00.649 bdev subsystem init successfully 00:07:00.649 created a nvmf target service 00:07:00.649 create targets's poll groups done 00:07:00.649 all subsystems of target started 00:07:00.649 nvmf target is running 00:07:00.649 all subsystems of target stopped 00:07:00.649 destroy targets's poll groups done 00:07:00.649 destroyed the nvmf target service 00:07:00.649 bdev subsystem finish successfully 00:07:00.649 nvmf threads destroy successfully 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:00.649 12:36:50 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.907 12:36:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:00.907 12:36:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:00.907 12:36:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:00.907 12:36:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:00.907 00:07:00.907 real 0m19.751s 00:07:00.907 user 0m46.621s 00:07:00.907 sys 0m5.801s 00:07:00.907 12:36:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.907 12:36:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:00.907 ************************************ 00:07:00.907 END TEST nvmf_example 00:07:00.907 ************************************ 00:07:01.167 12:36:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:01.167 12:36:52 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:01.167 12:36:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:01.167 12:36:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.167 12:36:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.167 ************************************ 00:07:01.167 START TEST nvmf_filesystem 00:07:01.167 ************************************ 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:01.167 * Looking for test storage... 
00:07:01.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:01.167 12:36:52 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 
00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:01.167 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:01.168 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:01.168 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:01.168 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:01.168 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:01.168 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:01.168 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:01.168 12:36:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 
00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:01.168 #define SPDK_CONFIG_H 00:07:01.168 
#define SPDK_CONFIG_APPS 1 00:07:01.168 #define SPDK_CONFIG_ARCH native 00:07:01.168 #undef SPDK_CONFIG_ASAN 00:07:01.168 #undef SPDK_CONFIG_AVAHI 00:07:01.168 #undef SPDK_CONFIG_CET 00:07:01.168 #define SPDK_CONFIG_COVERAGE 1 00:07:01.168 #define SPDK_CONFIG_CROSS_PREFIX 00:07:01.168 #undef SPDK_CONFIG_CRYPTO 00:07:01.168 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:01.168 #undef SPDK_CONFIG_CUSTOMOCF 00:07:01.168 #undef SPDK_CONFIG_DAOS 00:07:01.168 #define SPDK_CONFIG_DAOS_DIR 00:07:01.168 #define SPDK_CONFIG_DEBUG 1 00:07:01.168 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:01.168 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:01.168 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:01.168 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:01.168 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:01.168 #undef SPDK_CONFIG_DPDK_UADK 00:07:01.168 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:01.168 #define SPDK_CONFIG_EXAMPLES 1 00:07:01.168 #undef SPDK_CONFIG_FC 00:07:01.168 #define SPDK_CONFIG_FC_PATH 00:07:01.168 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:01.168 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:01.168 #undef SPDK_CONFIG_FUSE 00:07:01.168 #undef SPDK_CONFIG_FUZZER 00:07:01.168 #define SPDK_CONFIG_FUZZER_LIB 00:07:01.168 #undef SPDK_CONFIG_GOLANG 00:07:01.168 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:01.168 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:01.168 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:01.168 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:01.168 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:01.168 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:01.168 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:01.168 #define SPDK_CONFIG_IDXD 1 00:07:01.168 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:01.168 #undef SPDK_CONFIG_IPSEC_MB 00:07:01.168 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:01.168 #define SPDK_CONFIG_ISAL 1 00:07:01.168 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:01.168 #define SPDK_CONFIG_ISCSI_INITIATOR 1 
00:07:01.168 #define SPDK_CONFIG_LIBDIR 00:07:01.168 #undef SPDK_CONFIG_LTO 00:07:01.168 #define SPDK_CONFIG_MAX_LCORES 128 00:07:01.168 #define SPDK_CONFIG_NVME_CUSE 1 00:07:01.168 #undef SPDK_CONFIG_OCF 00:07:01.168 #define SPDK_CONFIG_OCF_PATH 00:07:01.168 #define SPDK_CONFIG_OPENSSL_PATH 00:07:01.168 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:01.168 #define SPDK_CONFIG_PGO_DIR 00:07:01.168 #undef SPDK_CONFIG_PGO_USE 00:07:01.168 #define SPDK_CONFIG_PREFIX /usr/local 00:07:01.168 #undef SPDK_CONFIG_RAID5F 00:07:01.168 #undef SPDK_CONFIG_RBD 00:07:01.168 #define SPDK_CONFIG_RDMA 1 00:07:01.168 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:01.168 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:01.168 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:01.168 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:01.168 #define SPDK_CONFIG_SHARED 1 00:07:01.168 #undef SPDK_CONFIG_SMA 00:07:01.168 #define SPDK_CONFIG_TESTS 1 00:07:01.168 #undef SPDK_CONFIG_TSAN 00:07:01.168 #define SPDK_CONFIG_UBLK 1 00:07:01.168 #define SPDK_CONFIG_UBSAN 1 00:07:01.168 #undef SPDK_CONFIG_UNIT_TESTS 00:07:01.168 #undef SPDK_CONFIG_URING 00:07:01.168 #define SPDK_CONFIG_URING_PATH 00:07:01.168 #undef SPDK_CONFIG_URING_ZNS 00:07:01.168 #undef SPDK_CONFIG_USDT 00:07:01.168 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:01.168 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:01.168 #define SPDK_CONFIG_VFIO_USER 1 00:07:01.168 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:01.168 #define SPDK_CONFIG_VHOST 1 00:07:01.168 #define SPDK_CONFIG_VIRTIO 1 00:07:01.168 #undef SPDK_CONFIG_VTUNE 00:07:01.168 #define SPDK_CONFIG_VTUNE_DIR 00:07:01.168 #define SPDK_CONFIG_WERROR 1 00:07:01.168 #define SPDK_CONFIG_WPDK_DIR 00:07:01.168 #undef SPDK_CONFIG_XNVME 00:07:01.168 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.168 12:36:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:01.169 12:36:53 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:01.169 12:36:53 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:01.169 12:36:53 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:01.169 12:36:53 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:01.169 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:01.170 12:36:53 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export 
DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:01.170 12:36:53 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:01.170 12:36:53 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3761029 ]] 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3761029 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.tNiF1C 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 
00:07:01.170 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.tNiF1C/tests/target /tmp/spdk.tNiF1C 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=954339328 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330090496 00:07:01.171 12:36:53 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=83798913024 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=94501482496 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10702569472 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=47195103232 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=47250739200 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=18890862592 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=18900299776 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
uses["$mount"]=9437184 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=47249940480 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=47250743296 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=802816 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=9450143744 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450147840 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:01.171 * Looking for test storage... 
00:07:01.171 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=83798913024 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12917161984 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:01.431 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:01.432 12:36:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.703 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:06.703 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:06.703 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:07:06.703 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:06.703 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:06.704 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:06.704 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:06.704 Found net devices under 0000:af:00.0: cvl_0_0 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:06.704 Found net devices under 0000:af:00.1: cvl_0_1 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:06.704 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:06.963 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:06.963 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:06.963 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:06.963 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:06.963 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:06.963 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:06.963 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:06.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:06.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:07:06.963 00:07:06.963 --- 10.0.0.2 ping statistics --- 00:07:06.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.963 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:07:06.963 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:06.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:06.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:07:06.963 00:07:06.963 --- 10.0.0.1 ping statistics --- 00:07:06.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.963 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:07:06.963 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.963 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:06.963 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:06.963 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.963 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:06.963 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:06.963 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.963 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:06.963 12:36:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:07.222 12:36:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:07.222 12:36:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:07.222 12:36:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.222 12:36:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:07.222 ************************************ 00:07:07.222 START TEST nvmf_filesystem_no_in_capsule 00:07:07.222 ************************************ 00:07:07.222 12:36:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:07.222 12:36:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # 
in_capsule=0 00:07:07.222 12:36:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:07.222 12:36:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:07.222 12:36:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:07.222 12:36:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.222 12:36:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3764215 00:07:07.222 12:36:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3764215 00:07:07.222 12:36:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:07.222 12:36:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3764215 ']' 00:07:07.222 12:36:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.222 12:36:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.222 12:36:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:07.222 12:36:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.222 12:36:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.222 [2024-07-15 12:36:59.008971] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:07:07.222 [2024-07-15 12:36:59.009025] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.222 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.222 [2024-07-15 12:36:59.089629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.481 [2024-07-15 12:36:59.184978] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.481 [2024-07-15 12:36:59.185024] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:07.481 [2024-07-15 12:36:59.185035] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:07.481 [2024-07-15 12:36:59.185043] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:07.481 [2024-07-15 12:36:59.185050] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:07.481 [2024-07-15 12:36:59.188280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.481 [2024-07-15 12:36:59.188344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.481 [2024-07-15 12:36:59.188456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.481 [2024-07-15 12:36:59.188456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.481 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.481 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:07.481 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:07.481 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:07.481 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.481 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:07.481 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:07.481 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:07.481 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.481 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.481 [2024-07-15 12:36:59.347596] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.481 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.481 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:07.481 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.481 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.740 Malloc1 00:07:07.740 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.740 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:07.740 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.740 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.740 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.740 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:07.740 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.740 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.740 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.741 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.741 12:36:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.741 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.741 [2024-07-15 12:36:59.506598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.741 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.741 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:07.741 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:07.741 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:07.741 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:07.741 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:07.741 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:07.741 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.741 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.741 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.741 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:07.741 { 00:07:07.741 "name": "Malloc1", 00:07:07.741 "aliases": [ 00:07:07.741 "6b0c31b0-1e6a-4a3f-bb6e-355fc5d8b884" 00:07:07.741 ], 00:07:07.741 "product_name": "Malloc disk", 
00:07:07.741 "block_size": 512, 00:07:07.741 "num_blocks": 1048576, 00:07:07.741 "uuid": "6b0c31b0-1e6a-4a3f-bb6e-355fc5d8b884", 00:07:07.741 "assigned_rate_limits": { 00:07:07.741 "rw_ios_per_sec": 0, 00:07:07.741 "rw_mbytes_per_sec": 0, 00:07:07.741 "r_mbytes_per_sec": 0, 00:07:07.741 "w_mbytes_per_sec": 0 00:07:07.741 }, 00:07:07.741 "claimed": true, 00:07:07.741 "claim_type": "exclusive_write", 00:07:07.741 "zoned": false, 00:07:07.741 "supported_io_types": { 00:07:07.741 "read": true, 00:07:07.741 "write": true, 00:07:07.741 "unmap": true, 00:07:07.741 "flush": true, 00:07:07.741 "reset": true, 00:07:07.741 "nvme_admin": false, 00:07:07.741 "nvme_io": false, 00:07:07.741 "nvme_io_md": false, 00:07:07.741 "write_zeroes": true, 00:07:07.741 "zcopy": true, 00:07:07.741 "get_zone_info": false, 00:07:07.741 "zone_management": false, 00:07:07.741 "zone_append": false, 00:07:07.741 "compare": false, 00:07:07.741 "compare_and_write": false, 00:07:07.741 "abort": true, 00:07:07.741 "seek_hole": false, 00:07:07.741 "seek_data": false, 00:07:07.741 "copy": true, 00:07:07.741 "nvme_iov_md": false 00:07:07.741 }, 00:07:07.741 "memory_domains": [ 00:07:07.741 { 00:07:07.741 "dma_device_id": "system", 00:07:07.741 "dma_device_type": 1 00:07:07.741 }, 00:07:07.741 { 00:07:07.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.741 "dma_device_type": 2 00:07:07.741 } 00:07:07.741 ], 00:07:07.741 "driver_specific": {} 00:07:07.741 } 00:07:07.741 ]' 00:07:07.741 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:07.741 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:07.741 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:07.741 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:07.741 
12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:07.741 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:07.741 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:07.741 12:36:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:09.116 12:37:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:09.116 12:37:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:09.116 12:37:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:09.116 12:37:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:09.116 12:37:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:11.650 12:37:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:11.650 12:37:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:11.650 12:37:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:11.650 12:37:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:11.650 12:37:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:11.650 12:37:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:11.650 12:37:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:11.650 12:37:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:11.650 12:37:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:11.650 12:37:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:11.650 12:37:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:11.650 12:37:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:11.650 12:37:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:11.650 12:37:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:11.650 12:37:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:11.650 12:37:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:11.650 12:37:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:11.650 12:37:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:11.909 12:37:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:12.844 12:37:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:12.844 12:37:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:12.844 12:37:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:12.844 12:37:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.844 12:37:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.844 ************************************ 00:07:12.844 START TEST filesystem_ext4 00:07:12.844 ************************************ 00:07:12.844 12:37:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:12.844 12:37:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:12.844 12:37:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:12.844 12:37:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:12.844 12:37:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:12.844 12:37:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:12.844 12:37:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:12.844 12:37:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 
00:07:12.844 12:37:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:12.844 12:37:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:12.844 12:37:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:12.844 mke2fs 1.46.5 (30-Dec-2021) 00:07:13.103 Discarding device blocks: 0/522240 done 00:07:13.103 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:13.103 Filesystem UUID: 4df8e718-490c-486d-b3e8-72d8956ce11d 00:07:13.103 Superblock backups stored on blocks: 00:07:13.103 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:13.103 00:07:13.103 Allocating group tables: 0/64 done 00:07:13.103 Writing inode tables: 0/64 done 00:07:13.362 Creating journal (8192 blocks): done 00:07:13.362 Writing superblocks and filesystem accounting information: 0/64 done 00:07:13.362 00:07:13.362 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:13.362 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:13.621 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:13.621 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:13.621 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:13.621 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:13.621 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
target/filesystem.sh@29 -- # i=0 00:07:13.621 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:13.621 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3764215 00:07:13.621 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:13.621 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:13.621 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:13.621 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:13.880 00:07:13.880 real 0m0.864s 00:07:13.880 user 0m0.025s 00:07:13.880 sys 0m0.064s 00:07:13.880 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.880 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:13.880 ************************************ 00:07:13.880 END TEST filesystem_ext4 00:07:13.880 ************************************ 00:07:13.880 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:13.880 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:13.880 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:13.880 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.880 12:37:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.880 ************************************ 00:07:13.880 START TEST filesystem_btrfs 00:07:13.880 ************************************ 00:07:13.880 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:13.880 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:13.880 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:13.880 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:13.880 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:13.880 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:13.880 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:13.880 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:13.880 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:13.880 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:13.880 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:13.880 btrfs-progs v6.6.2 00:07:13.880 See https://btrfs.readthedocs.io for more 
information. 00:07:13.880 00:07:13.880 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:13.880 NOTE: several default settings have changed in version 5.15, please make sure 00:07:13.880 this does not affect your deployments: 00:07:13.880 - DUP for metadata (-m dup) 00:07:13.881 - enabled no-holes (-O no-holes) 00:07:13.881 - enabled free-space-tree (-R free-space-tree) 00:07:13.881 00:07:13.881 Label: (null) 00:07:13.881 UUID: 17e3e9c7-3961-4a17-94bb-0c9b210e9a7b 00:07:13.881 Node size: 16384 00:07:13.881 Sector size: 4096 00:07:13.881 Filesystem size: 510.00MiB 00:07:13.881 Block group profiles: 00:07:13.881 Data: single 8.00MiB 00:07:13.881 Metadata: DUP 32.00MiB 00:07:13.881 System: DUP 8.00MiB 00:07:13.881 SSD detected: yes 00:07:13.881 Zoned device: no 00:07:13.881 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:13.881 Runtime features: free-space-tree 00:07:13.881 Checksum: crc32c 00:07:13.881 Number of devices: 1 00:07:13.881 Devices: 00:07:13.881 ID SIZE PATH 00:07:13.881 1 510.00MiB /dev/nvme0n1p1 00:07:13.881 00:07:13.881 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:13.881 12:37:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:14.818 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:14.818 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:14.818 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:14.818 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:14.818 12:37:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:14.818 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:14.818 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3764215 00:07:14.818 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:14.818 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:15.077 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:15.077 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:15.077 00:07:15.077 real 0m1.136s 00:07:15.077 user 0m0.026s 00:07:15.077 sys 0m0.125s 00:07:15.077 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.077 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:15.077 ************************************ 00:07:15.077 END TEST filesystem_btrfs 00:07:15.077 ************************************ 00:07:15.077 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:15.077 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:15.077 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:15.077 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.077 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:15.077 ************************************ 00:07:15.077 START TEST filesystem_xfs 00:07:15.077 ************************************ 00:07:15.077 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:15.077 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:15.077 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:15.077 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:15.077 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:15.077 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:15.077 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:15.077 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:15.077 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:15.077 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:15.077 12:37:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:15.077 meta-data=/dev/nvme0n1p1 isize=512 
agcount=4, agsize=32640 blks 00:07:15.077 = sectsz=512 attr=2, projid32bit=1 00:07:15.077 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:15.077 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:15.077 data = bsize=4096 blocks=130560, imaxpct=25 00:07:15.077 = sunit=0 swidth=0 blks 00:07:15.077 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:15.077 log =internal log bsize=4096 blocks=16384, version=2 00:07:15.077 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:15.077 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:16.011 Discarding blocks...Done. 00:07:16.011 12:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:16.011 12:37:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:18.542 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:18.542 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:18.542 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:18.542 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:18.542 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:18.542 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:18.542 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3764215 00:07:18.542 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:18.542 12:37:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:18.542 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:18.542 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:18.542 00:07:18.542 real 0m3.265s 00:07:18.542 user 0m0.023s 00:07:18.542 sys 0m0.073s 00:07:18.542 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.542 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:18.542 ************************************ 00:07:18.542 END TEST filesystem_xfs 00:07:18.542 ************************************ 00:07:18.542 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:18.542 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:18.542 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:18.542 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:18.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:18.801 12:37:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3764215 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3764215 ']' 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3764215 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 3764215 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3764215' 00:07:18.801 killing process with pid 3764215 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3764215 00:07:18.801 12:37:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3764215 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:19.368 00:07:19.368 real 0m12.055s 00:07:19.368 user 0m46.958s 00:07:19.368 sys 0m1.301s 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.368 ************************************ 00:07:19.368 END TEST nvmf_filesystem_no_in_capsule 00:07:19.368 ************************************ 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.368 ************************************ 00:07:19.368 START TEST 
nvmf_filesystem_in_capsule 00:07:19.368 ************************************ 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3766548 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3766548 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3766548 ']' 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:19.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.368 12:37:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.368 [2024-07-15 12:37:11.135217] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:07:19.368 [2024-07-15 12:37:11.135260] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.368 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.368 [2024-07-15 12:37:11.209937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.368 [2024-07-15 12:37:11.294915] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:19.368 [2024-07-15 12:37:11.294962] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:19.368 [2024-07-15 12:37:11.294972] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:19.368 [2024-07-15 12:37:11.294981] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:19.368 [2024-07-15 12:37:11.294988] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:19.368 [2024-07-15 12:37:11.295092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.368 [2024-07-15 12:37:11.295203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.368 [2024-07-15 12:37:11.295301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.368 [2024-07-15 12:37:11.295302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.302 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.303 [2024-07-15 12:37:12.050886] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.303 Malloc1 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.303 12:37:12 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.303 [2024-07-15 12:37:12.208020] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:20.303 { 00:07:20.303 "name": "Malloc1", 00:07:20.303 "aliases": [ 00:07:20.303 "7c449518-040e-4a65-b49b-a98a0e5bc7a9" 00:07:20.303 ], 00:07:20.303 "product_name": "Malloc disk", 00:07:20.303 "block_size": 512, 00:07:20.303 "num_blocks": 1048576, 00:07:20.303 "uuid": "7c449518-040e-4a65-b49b-a98a0e5bc7a9", 00:07:20.303 "assigned_rate_limits": { 
00:07:20.303 "rw_ios_per_sec": 0, 00:07:20.303 "rw_mbytes_per_sec": 0, 00:07:20.303 "r_mbytes_per_sec": 0, 00:07:20.303 "w_mbytes_per_sec": 0 00:07:20.303 }, 00:07:20.303 "claimed": true, 00:07:20.303 "claim_type": "exclusive_write", 00:07:20.303 "zoned": false, 00:07:20.303 "supported_io_types": { 00:07:20.303 "read": true, 00:07:20.303 "write": true, 00:07:20.303 "unmap": true, 00:07:20.303 "flush": true, 00:07:20.303 "reset": true, 00:07:20.303 "nvme_admin": false, 00:07:20.303 "nvme_io": false, 00:07:20.303 "nvme_io_md": false, 00:07:20.303 "write_zeroes": true, 00:07:20.303 "zcopy": true, 00:07:20.303 "get_zone_info": false, 00:07:20.303 "zone_management": false, 00:07:20.303 "zone_append": false, 00:07:20.303 "compare": false, 00:07:20.303 "compare_and_write": false, 00:07:20.303 "abort": true, 00:07:20.303 "seek_hole": false, 00:07:20.303 "seek_data": false, 00:07:20.303 "copy": true, 00:07:20.303 "nvme_iov_md": false 00:07:20.303 }, 00:07:20.303 "memory_domains": [ 00:07:20.303 { 00:07:20.303 "dma_device_id": "system", 00:07:20.303 "dma_device_type": 1 00:07:20.303 }, 00:07:20.303 { 00:07:20.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.303 "dma_device_type": 2 00:07:20.303 } 00:07:20.303 ], 00:07:20.303 "driver_specific": {} 00:07:20.303 } 00:07:20.303 ]' 00:07:20.303 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:20.561 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:20.561 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:20.561 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:20.561 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:20.561 12:37:12 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:20.561 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:20.561 12:37:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:21.934 12:37:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:21.934 12:37:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:21.934 12:37:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:21.934 12:37:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:21.934 12:37:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:23.834 12:37:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:23.834 12:37:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:23.834 12:37:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:23.834 12:37:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:23.834 12:37:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:23.834 12:37:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # 
return 0 00:07:23.834 12:37:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:23.834 12:37:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:23.834 12:37:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:23.834 12:37:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:23.834 12:37:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:23.834 12:37:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:23.834 12:37:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:23.834 12:37:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:23.834 12:37:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:23.834 12:37:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:23.834 12:37:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:24.093 12:37:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:25.028 12:37:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:25.964 12:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:25.964 12:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 
00:07:25.964 12:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:25.964 12:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.964 12:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.964 ************************************ 00:07:25.964 START TEST filesystem_in_capsule_ext4 00:07:25.964 ************************************ 00:07:25.964 12:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:25.964 12:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:25.964 12:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:25.964 12:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:25.964 12:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:25.964 12:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:25.964 12:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:25.964 12:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:25.964 12:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:25.964 12:37:17 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:25.964 12:37:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:25.964 mke2fs 1.46.5 (30-Dec-2021) 00:07:25.964 Discarding device blocks: 0/522240 done 00:07:25.964 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:25.964 Filesystem UUID: a71087a2-9a32-4894-b967-1a8cfd152226 00:07:25.964 Superblock backups stored on blocks: 00:07:25.964 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:25.964 00:07:25.964 Allocating group tables: 0/64 done 00:07:25.964 Writing inode tables: 0/64 done 00:07:26.532 Creating journal (8192 blocks): done 00:07:26.532 Writing superblocks and filesystem accounting information: 0/64 done 00:07:26.532 00:07:26.532 12:37:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:26.532 12:37:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:27.577 12:37:19 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3766548 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:27.577 00:07:27.577 real 0m1.638s 00:07:27.577 user 0m0.032s 00:07:27.577 sys 0m0.058s 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:27.577 ************************************ 00:07:27.577 END TEST filesystem_in_capsule_ext4 00:07:27.577 ************************************ 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.577 
12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.577 ************************************ 00:07:27.577 START TEST filesystem_in_capsule_btrfs 00:07:27.577 ************************************ 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:27.577 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f 
/dev/nvme0n1p1 00:07:28.144 btrfs-progs v6.6.2 00:07:28.144 See https://btrfs.readthedocs.io for more information. 00:07:28.144 00:07:28.144 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:28.144 NOTE: several default settings have changed in version 5.15, please make sure 00:07:28.144 this does not affect your deployments: 00:07:28.144 - DUP for metadata (-m dup) 00:07:28.144 - enabled no-holes (-O no-holes) 00:07:28.144 - enabled free-space-tree (-R free-space-tree) 00:07:28.144 00:07:28.144 Label: (null) 00:07:28.144 UUID: 85361496-9b4c-44ce-8eb5-f0fc91ce7cc1 00:07:28.144 Node size: 16384 00:07:28.144 Sector size: 4096 00:07:28.144 Filesystem size: 510.00MiB 00:07:28.144 Block group profiles: 00:07:28.144 Data: single 8.00MiB 00:07:28.144 Metadata: DUP 32.00MiB 00:07:28.144 System: DUP 8.00MiB 00:07:28.144 SSD detected: yes 00:07:28.144 Zoned device: no 00:07:28.144 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:28.144 Runtime features: free-space-tree 00:07:28.144 Checksum: crc32c 00:07:28.144 Number of devices: 1 00:07:28.144 Devices: 00:07:28.144 ID SIZE PATH 00:07:28.144 1 510.00MiB /dev/nvme0n1p1 00:07:28.144 00:07:28.144 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:28.144 12:37:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:28.403 12:37:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3766548 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.403 00:07:28.403 real 0m0.790s 00:07:28.403 user 0m0.026s 00:07:28.403 sys 0m0.124s 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:28.403 ************************************ 00:07:28.403 END TEST filesystem_in_capsule_btrfs 00:07:28.403 ************************************ 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create 
xfs nvme0n1 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.403 ************************************ 00:07:28.403 START TEST filesystem_in_capsule_xfs 00:07:28.403 ************************************ 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:28.403 12:37:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:28.403 12:37:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:28.403 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:28.403 = sectsz=512 attr=2, projid32bit=1 00:07:28.403 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:28.403 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:28.403 data = bsize=4096 blocks=130560, imaxpct=25 00:07:28.403 = sunit=0 swidth=0 blks 00:07:28.404 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:28.404 log =internal log bsize=4096 blocks=16384, version=2 00:07:28.404 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:28.404 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:29.339 Discarding blocks...Done. 00:07:29.339 12:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:29.339 12:37:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:31.873 12:37:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:31.873 12:37:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:31.873 12:37:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:31.873 12:37:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:31.873 12:37:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:31.873 12:37:23 
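The btrfs and xfs traces above both run through the same `make_filesystem` helper (autotest_common.sh@924-935): pick the right force flag for the filesystem, then retry mkfs a few times. A minimal sketch, assuming a simplified retry limit — the helper name matches the trace, but the body here is a reconstruction, not the actual SPDK source:

```shell
# Sketch of make_filesystem as traced above (assumption: simplified
# from common/autotest_common.sh; retry limit of 3 is illustrative).
make_filesystem() {
    local fstype=$1 dev_name=$2 i=0 force
    if [ "$fstype" = ext4 ]; then
        force=-F    # mke2fs spells "force" as -F (sh@929 checks for ext4)
    else
        force=-f    # mkfs.btrfs and mkfs.xfs both use -f (sh@932)
    fi
    # Retry: a freshly created partition can take a moment to settle.
    until mkfs."$fstype" "$force" "$dev_name"; do
        i=$((i + 1))
        [ "$i" -ge 3 ] && return 1
        sleep 1
    done
}
```

The trace's `force=-f` at sh@932 is this else-branch firing, since neither btrfs nor xfs matched the ext4 test at sh@929.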
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:31.873 12:37:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3766548 00:07:31.873 12:37:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:31.873 12:37:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:31.873 12:37:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:31.873 12:37:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:31.873 00:07:31.873 real 0m3.367s 00:07:31.873 user 0m0.016s 00:07:31.873 sys 0m0.080s 00:07:31.873 12:37:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.873 12:37:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:31.873 ************************************ 00:07:31.873 END TEST filesystem_in_capsule_xfs 00:07:31.873 ************************************ 00:07:31.873 12:37:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:31.873 12:37:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:32.133 12:37:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:32.133 12:37:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:32.393 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3766548 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3766548 ']' 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
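The `waitforserial_disconnect SPDKISFASTANDAWESOME` call traced above (autotest_common.sh@1219-1231) polls `lsblk` until no block device still reports the test serial. A hedged sketch — the helper name and serial come from the trace, the loop body and timeout are assumptions:

```shell
# Sketch of waitforserial_disconnect (assumption: simplified from
# common/autotest_common.sh; 15-iteration timeout is illustrative).
waitforserial_disconnect() {
    local serial=$1 i=0
    # Keep polling while any block device still carries the serial;
    # the kernel needs a moment to tear down the namespace after
    # "nvme disconnect".
    while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
        i=$((i + 1))
        [ "$i" -ge 15 ] && return 1   # give up after ~15s
        sleep 1
    done
}
```

In the trace the grep at sh@1220 already finds no match, so the loop exits immediately and sh@1231 returns 0.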
common/autotest_common.sh@952 -- # kill -0 3766548 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3766548 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3766548' 00:07:32.393 killing process with pid 3766548 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3766548 00:07:32.393 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3766548 00:07:32.652 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:32.652 00:07:32.652 real 0m13.470s 00:07:32.652 user 0m52.742s 00:07:32.652 sys 0m1.344s 00:07:32.652 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.652 12:37:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.652 ************************************ 00:07:32.652 END TEST nvmf_filesystem_in_capsule 00:07:32.652 ************************************ 00:07:32.652 12:37:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:32.652 12:37:24 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:32.652 12:37:24 
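The `killprocess 3766548` sequence above (autotest_common.sh@948-972) checks the pid is alive with `kill -0`, announces it, signals it, and reaps it. A minimal sketch under those assumptions — the real helper also inspects the process name via `ps`/`uname` as the trace shows, which this reconstruction omits:

```shell
# Sketch of killprocess as traced above (assumption: simplified from
# common/autotest_common.sh; the ps/uname process-name check is omitted).
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1   # is it still running? (sh@952)
    echo "killing process with pid $pid"     # matches the traced message
    kill "$pid"                              # sh@967
    wait "$pid" 2>/dev/null || true          # sh@972: reap, ignore signal status
}
```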
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:32.652 12:37:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:32.652 12:37:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:32.652 12:37:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:32.652 12:37:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:32.652 12:37:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:32.911 rmmod nvme_tcp 00:07:32.911 rmmod nvme_fabrics 00:07:32.911 rmmod nvme_keyring 00:07:32.911 12:37:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:32.911 12:37:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:32.911 12:37:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:32.911 12:37:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:32.911 12:37:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:32.911 12:37:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:32.911 12:37:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:32.911 12:37:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:32.911 12:37:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:32.911 12:37:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.911 12:37:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.911 12:37:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.829 12:37:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:34.829 00:07:34.829 real 0m33.835s 00:07:34.829 user 1m41.383s 00:07:34.829 sys 0m7.219s 00:07:34.829 12:37:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.829 12:37:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.829 ************************************ 00:07:34.829 END TEST nvmf_filesystem 00:07:34.829 ************************************ 00:07:35.088 12:37:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:35.088 12:37:26 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:35.088 12:37:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:35.088 12:37:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.088 12:37:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:35.088 ************************************ 00:07:35.088 START TEST nvmf_target_discovery 00:07:35.088 ************************************ 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:35.088 * Looking for test storage... 
00:07:35.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.088 12:37:26 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:35.089 12:37:26 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:35.089 12:37:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:41.655 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.655 
12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:41.655 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.655 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:41.656 Found net devices under 0000:af:00.0: cvl_0_0 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:41.656 Found net devices under 0000:af:00.1: cvl_0_1 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.656 12:37:32 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:41.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:07:41.656 00:07:41.656 --- 10.0.0.2 ping statistics --- 00:07:41.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.656 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:41.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:07:41.656 00:07:41.656 --- 10.0.0.1 ping statistics --- 00:07:41.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.656 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3772847 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3772847 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3772847 ']' 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:41.656 12:37:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.656 [2024-07-15 12:37:32.877070] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:07:41.656 [2024-07-15 12:37:32.877125] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.656 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.656 [2024-07-15 12:37:32.964029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.656 [2024-07-15 12:37:33.053112] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.656 [2024-07-15 12:37:33.053158] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.656 [2024-07-15 12:37:33.053169] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.656 [2024-07-15 12:37:33.053178] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.656 [2024-07-15 12:37:33.053186] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:41.656 [2024-07-15 12:37:33.053290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.656 [2024-07-15 12:37:33.053353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.656 [2024-07-15 12:37:33.053464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.656 [2024-07-15 12:37:33.053465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.915 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:41.915 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:41.915 12:37:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:41.915 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:41.915 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.174 [2024-07-15 12:37:33.874264] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:42.174 12:37:33 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.174 Null1 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.174 [2024-07-15 12:37:33.926621] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:42.174 12:37:33 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.174 Null2 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.174 Null3 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.174 12:37:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.174 Null4 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.174 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:42.433 00:07:42.433 Discovery Log Number of Records 6, Generation counter 6 00:07:42.433 =====Discovery Log Entry 0====== 00:07:42.433 trtype: tcp 00:07:42.433 adrfam: ipv4 00:07:42.433 subtype: current discovery subsystem 00:07:42.433 treq: not required 00:07:42.433 portid: 0 00:07:42.433 trsvcid: 4420 00:07:42.433 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:42.433 traddr: 10.0.0.2 00:07:42.433 eflags: explicit discovery connections, duplicate discovery information 00:07:42.433 sectype: none 00:07:42.433 =====Discovery Log Entry 1====== 00:07:42.433 trtype: tcp 00:07:42.433 adrfam: ipv4 00:07:42.433 subtype: nvme subsystem 00:07:42.433 treq: not required 00:07:42.433 portid: 0 00:07:42.433 trsvcid: 4420 00:07:42.433 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:42.433 traddr: 10.0.0.2 00:07:42.433 eflags: none 00:07:42.433 sectype: none 00:07:42.433 =====Discovery Log Entry 2====== 00:07:42.433 trtype: tcp 00:07:42.433 adrfam: ipv4 00:07:42.433 subtype: nvme subsystem 00:07:42.433 treq: not required 00:07:42.433 portid: 
0 00:07:42.433 trsvcid: 4420 00:07:42.433 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:42.433 traddr: 10.0.0.2 00:07:42.433 eflags: none 00:07:42.433 sectype: none 00:07:42.433 =====Discovery Log Entry 3====== 00:07:42.433 trtype: tcp 00:07:42.433 adrfam: ipv4 00:07:42.433 subtype: nvme subsystem 00:07:42.433 treq: not required 00:07:42.433 portid: 0 00:07:42.433 trsvcid: 4420 00:07:42.433 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:42.433 traddr: 10.0.0.2 00:07:42.433 eflags: none 00:07:42.433 sectype: none 00:07:42.433 =====Discovery Log Entry 4====== 00:07:42.433 trtype: tcp 00:07:42.433 adrfam: ipv4 00:07:42.433 subtype: nvme subsystem 00:07:42.433 treq: not required 00:07:42.433 portid: 0 00:07:42.433 trsvcid: 4420 00:07:42.433 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:42.433 traddr: 10.0.0.2 00:07:42.433 eflags: none 00:07:42.433 sectype: none 00:07:42.433 =====Discovery Log Entry 5====== 00:07:42.433 trtype: tcp 00:07:42.433 adrfam: ipv4 00:07:42.433 subtype: discovery subsystem referral 00:07:42.433 treq: not required 00:07:42.433 portid: 0 00:07:42.433 trsvcid: 4430 00:07:42.433 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:42.433 traddr: 10.0.0.2 00:07:42.433 eflags: none 00:07:42.433 sectype: none 00:07:42.433 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:42.433 Perform nvmf subsystem discovery via RPC 00:07:42.433 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:42.433 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.433 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.433 [ 00:07:42.433 { 00:07:42.433 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:42.433 "subtype": "Discovery", 00:07:42.433 "listen_addresses": [ 00:07:42.433 { 00:07:42.433 "trtype": "TCP", 00:07:42.433 "adrfam": "IPv4", 00:07:42.433 "traddr": "10.0.0.2", 
00:07:42.433 "trsvcid": "4420" 00:07:42.433 } 00:07:42.433 ], 00:07:42.433 "allow_any_host": true, 00:07:42.433 "hosts": [] 00:07:42.433 }, 00:07:42.433 { 00:07:42.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:42.433 "subtype": "NVMe", 00:07:42.433 "listen_addresses": [ 00:07:42.433 { 00:07:42.433 "trtype": "TCP", 00:07:42.433 "adrfam": "IPv4", 00:07:42.433 "traddr": "10.0.0.2", 00:07:42.433 "trsvcid": "4420" 00:07:42.433 } 00:07:42.433 ], 00:07:42.433 "allow_any_host": true, 00:07:42.433 "hosts": [], 00:07:42.433 "serial_number": "SPDK00000000000001", 00:07:42.433 "model_number": "SPDK bdev Controller", 00:07:42.433 "max_namespaces": 32, 00:07:42.433 "min_cntlid": 1, 00:07:42.433 "max_cntlid": 65519, 00:07:42.433 "namespaces": [ 00:07:42.433 { 00:07:42.433 "nsid": 1, 00:07:42.433 "bdev_name": "Null1", 00:07:42.433 "name": "Null1", 00:07:42.433 "nguid": "16E003C3AC40436DB9D56E34F7481071", 00:07:42.433 "uuid": "16e003c3-ac40-436d-b9d5-6e34f7481071" 00:07:42.433 } 00:07:42.433 ] 00:07:42.433 }, 00:07:42.433 { 00:07:42.433 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:42.433 "subtype": "NVMe", 00:07:42.433 "listen_addresses": [ 00:07:42.433 { 00:07:42.433 "trtype": "TCP", 00:07:42.433 "adrfam": "IPv4", 00:07:42.433 "traddr": "10.0.0.2", 00:07:42.433 "trsvcid": "4420" 00:07:42.433 } 00:07:42.433 ], 00:07:42.433 "allow_any_host": true, 00:07:42.433 "hosts": [], 00:07:42.433 "serial_number": "SPDK00000000000002", 00:07:42.433 "model_number": "SPDK bdev Controller", 00:07:42.433 "max_namespaces": 32, 00:07:42.433 "min_cntlid": 1, 00:07:42.433 "max_cntlid": 65519, 00:07:42.433 "namespaces": [ 00:07:42.433 { 00:07:42.433 "nsid": 1, 00:07:42.433 "bdev_name": "Null2", 00:07:42.433 "name": "Null2", 00:07:42.433 "nguid": "9006D3BA90B4454682C90C774726A680", 00:07:42.433 "uuid": "9006d3ba-90b4-4546-82c9-0c774726a680" 00:07:42.433 } 00:07:42.433 ] 00:07:42.433 }, 00:07:42.433 { 00:07:42.433 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:42.433 "subtype": "NVMe", 00:07:42.433 
"listen_addresses": [ 00:07:42.433 { 00:07:42.433 "trtype": "TCP", 00:07:42.433 "adrfam": "IPv4", 00:07:42.433 "traddr": "10.0.0.2", 00:07:42.433 "trsvcid": "4420" 00:07:42.433 } 00:07:42.433 ], 00:07:42.433 "allow_any_host": true, 00:07:42.433 "hosts": [], 00:07:42.433 "serial_number": "SPDK00000000000003", 00:07:42.433 "model_number": "SPDK bdev Controller", 00:07:42.433 "max_namespaces": 32, 00:07:42.433 "min_cntlid": 1, 00:07:42.434 "max_cntlid": 65519, 00:07:42.434 "namespaces": [ 00:07:42.434 { 00:07:42.434 "nsid": 1, 00:07:42.434 "bdev_name": "Null3", 00:07:42.434 "name": "Null3", 00:07:42.434 "nguid": "6F6B4D3F80984AD4A0115DD1A7322EA9", 00:07:42.434 "uuid": "6f6b4d3f-8098-4ad4-a011-5dd1a7322ea9" 00:07:42.434 } 00:07:42.434 ] 00:07:42.434 }, 00:07:42.434 { 00:07:42.434 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:42.434 "subtype": "NVMe", 00:07:42.434 "listen_addresses": [ 00:07:42.434 { 00:07:42.434 "trtype": "TCP", 00:07:42.434 "adrfam": "IPv4", 00:07:42.434 "traddr": "10.0.0.2", 00:07:42.434 "trsvcid": "4420" 00:07:42.434 } 00:07:42.434 ], 00:07:42.434 "allow_any_host": true, 00:07:42.434 "hosts": [], 00:07:42.434 "serial_number": "SPDK00000000000004", 00:07:42.434 "model_number": "SPDK bdev Controller", 00:07:42.434 "max_namespaces": 32, 00:07:42.434 "min_cntlid": 1, 00:07:42.434 "max_cntlid": 65519, 00:07:42.434 "namespaces": [ 00:07:42.434 { 00:07:42.434 "nsid": 1, 00:07:42.434 "bdev_name": "Null4", 00:07:42.434 "name": "Null4", 00:07:42.434 "nguid": "4FF5A2E1DB4849E5827AE0E9FEEBD6D3", 00:07:42.434 "uuid": "4ff5a2e1-db48-49e5-827a-e0e9feebd6d3" 00:07:42.434 } 00:07:42.434 ] 00:07:42.434 } 00:07:42.434 ] 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.434 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:42.693 rmmod nvme_tcp 00:07:42.693 rmmod nvme_fabrics 00:07:42.693 rmmod nvme_keyring 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:42.693 
12:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3772847 ']' 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3772847 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3772847 ']' 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3772847 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3772847 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3772847' 00:07:42.693 killing process with pid 3772847 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3772847 00:07:42.693 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3772847 00:07:42.952 12:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:42.952 12:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:42.952 12:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:42.952 12:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:42.952 12:37:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:42.952 12:37:34 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.952 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.952 12:37:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.856 12:37:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:44.856 00:07:44.856 real 0m9.971s 00:07:44.856 user 0m8.403s 00:07:44.856 sys 0m4.925s 00:07:44.856 12:37:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.856 12:37:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.856 ************************************ 00:07:44.856 END TEST nvmf_target_discovery 00:07:44.856 ************************************ 00:07:45.116 12:37:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:45.116 12:37:36 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:45.116 12:37:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:45.116 12:37:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.116 12:37:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:45.116 ************************************ 00:07:45.116 START TEST nvmf_referrals 00:07:45.116 ************************************ 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:45.116 * Looking for test storage... 
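The teardown above runs a `killprocess`-style helper: check the pid is alive, send it a signal, then `wait` to reap it and swallow the signal exit code. A minimal self-contained sketch of that pattern (the pid here is a stand-in `sleep`, not the harness's `nvmfpid`):

```shell
#!/usr/bin/env bash
# Minimal sketch of the kill-then-wait teardown pattern used by nvmftestfini.
# Assumes the pid belongs to a child of this shell (so `wait` can reap it).
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0   # nothing to do: already gone
  kill "$pid"
  wait "$pid" 2>/dev/null || true          # reap; ignore the signal exit code
}

sleep 60 &                                 # stand-in for the nvmf_tgt process
pid=$!
killprocess "$pid"
kill -0 "$pid" 2>/dev/null && echo "still running" || echo "stopped"
```

The `kill -0` probe is what lets the helper be called unconditionally from a trap: it succeeds only while the process exists, so double teardown is harmless.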
00:07:45.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
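`nvme gen-hostnqn`, used above to seed `NVME_HOSTNQN`, emits a random UUID wrapped in the standard `nqn.2014-08.org.nvmexpress:uuid:` prefix. When nvme-cli is unavailable, an equivalent value can be produced from the kernel's UUID source; note the `/proc` path below is Linux-specific and is an assumption, not what common.sh itself does:

```shell
#!/usr/bin/env bash
# Equivalent of `nvme gen-hostnqn`: a UUID-based host NQN.
# Assumption: Linux-only /proc UUID source stands in for nvme-cli.
hostnqn="nqn.2014-08.org.nvmexpress:uuid:$(cat /proc/sys/kernel/random/uuid)"
echo "$hostnqn"
```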
00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.116 
12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:45.116 12:37:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
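`nvmftestinit` above ends up in `nvmf_tcp_init`, which moves the target-side port into a private network namespace and addresses both ends; the individual `ip`/`iptables` commands appear verbatim further down in this log. A dry-run sketch of that topology, reusing this run's interface and namespace names (`cvl_0_0`, `cvl_0_1`, `cvl_0_0_ns_spdk`). It defaults to printing the commands; executing them for real (`DRY_RUN=0`) would need root and the actual NICs:

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init namespace topology (names from this run's log).
# Defaults to dry-run: prints each command instead of executing it.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }

run ip netns add cvl_0_0_ns_spdk                  # target-side namespace
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move target port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

Putting the target interface in its own namespace is what lets a single host act as both NVMe/TCP target (10.0.0.2) and initiator (10.0.0.1) over a real link, as the ping checks below confirm.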
00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:51.699 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:51.700 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:51.700 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:51.700 Found net devices under 0000:af:00.0: cvl_0_0 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.700 12:37:42 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:51.700 Found net devices under 0000:af:00.1: cvl_0_1 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:51.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:51.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:07:51.700 00:07:51.700 --- 10.0.0.2 ping statistics --- 00:07:51.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.700 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:51.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:51.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:07:51.700 00:07:51.700 --- 10.0.0.1 ping statistics --- 00:07:51.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.700 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:51.700 12:37:42 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3776832 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3776832 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3776832 ']' 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.700 12:37:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:51.700 [2024-07-15 12:37:42.976795] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:07:51.700 [2024-07-15 12:37:42.976862] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.700 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.700 [2024-07-15 12:37:43.064588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.700 [2024-07-15 12:37:43.155602] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.700 [2024-07-15 12:37:43.155645] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:51.700 [2024-07-15 12:37:43.155655] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.700 [2024-07-15 12:37:43.155664] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.700 [2024-07-15 12:37:43.155672] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:51.700 [2024-07-15 12:37:43.155725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.700 [2024-07-15 12:37:43.155838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.700 [2024-07-15 12:37:43.155951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.700 [2024-07-15 12:37:43.155951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.268 12:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.268 12:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:52.268 12:37:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:52.268 12:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:52.268 12:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.268 12:37:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.268 12:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:52.268 12:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.268 12:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.268 [2024-07-15 12:37:43.967207] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.268 12:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.268 12:37:43 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:52.268 12:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.268 12:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.268 [2024-07-15 12:37:43.987405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:52.268 12:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.268 12:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:52.268 12:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.268 12:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.268 12:37:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.268 12:37:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@48 -- # jq length 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:52.268 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:52.527 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd 
nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # sort 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:52.786 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:53.044 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:53.044 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:53.044 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:53.044 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:53.044 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.045 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | 
select(.subtype == "discovery subsystem referral")' 00:07:53.304 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:53.304 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:53.304 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.304 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.304 12:37:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.304 12:37:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 
00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.304 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:53.562 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:53.562 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:53.562 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:53.562 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:53.562 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 
--hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.562 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:53.562 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:53.562 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:53.562 12:37:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.562 12:37:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.562 12:37:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.562 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:53.562 12:37:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.562 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:53.562 12:37:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.562 12:37:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.562 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:53.562 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:53.562 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:53.562 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 
10.0.0.2 -s 8009 -o json 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:53.820 rmmod nvme_tcp 00:07:53.820 rmmod nvme_fabrics 00:07:53.820 rmmod nvme_keyring 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3776832 ']' 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3776832 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3776832 ']' 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3776832 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:53.820 
12:37:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3776832 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3776832' 00:07:53.820 killing process with pid 3776832 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 3776832 00:07:53.820 12:37:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3776832 00:07:54.079 12:37:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:54.079 12:37:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:54.079 12:37:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:54.079 12:37:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:54.079 12:37:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:54.079 12:37:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.079 12:37:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.079 12:37:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.614 12:37:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:56.614 00:07:56.614 real 0m11.162s 00:07:56.614 user 0m13.686s 00:07:56.614 sys 0m5.244s 00:07:56.614 12:37:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.614 12:37:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.614 
************************************ 00:07:56.614 END TEST nvmf_referrals 00:07:56.614 ************************************ 00:07:56.614 12:37:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:56.614 12:37:48 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:56.614 12:37:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:56.614 12:37:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.614 12:37:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:56.614 ************************************ 00:07:56.614 START TEST nvmf_connect_disconnect 00:07:56.614 ************************************ 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:56.614 * Looking for test storage... 
00:07:56.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.614 12:37:48 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.614 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:56.615 12:37:48 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:56.615 12:37:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 
-- # pci_devs=() 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:01.910 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:01.910 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.910 
12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:01.910 Found net devices under 0000:af:00.0: cvl_0_0 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:01.910 Found net devices under 0000:af:00.1: cvl_0_1 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.910 12:37:53 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:01.910 
12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:01.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:01.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:08:01.910 00:08:01.910 --- 10.0.0.2 ping statistics --- 00:08:01.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.910 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:01.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:01.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:08:01.910 00:08:01.910 --- 10.0.0.1 ping statistics --- 00:08:01.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.910 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:08:01.910 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.911 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:01.911 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:01.911 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.911 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:01.911 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:01.911 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.911 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:01.911 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:02.172 12:37:53 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:02.172 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:02.172 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:02.172 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:02.172 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3780956 00:08:02.172 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3780956 00:08:02.172 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3780956 ']' 00:08:02.172 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.172 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:02.172 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:02.172 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.172 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:02.172 12:37:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:02.172 [2024-07-15 12:37:53.944997] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:08:02.172 [2024-07-15 12:37:53.945051] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.172 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.172 [2024-07-15 12:37:54.029856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.431 [2024-07-15 12:37:54.124320] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.431 [2024-07-15 12:37:54.124360] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.431 [2024-07-15 12:37:54.124370] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.431 [2024-07-15 12:37:54.124379] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.431 [2024-07-15 12:37:54.124387] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
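The trace above walks the PCI-to-netdev discovery loop in nvmf/common.sh (markers @382 through @401): for each PCI function it globs the kernel's `net/` directory, strips the path prefix with `${pci_net_devs[@]##*/}`, and accumulates the interface names. A minimal sketch of that loop, run against a mock sysfs tree so it needs no hardware (the paths and the `cvl_0_*` names here are taken from the trace, but the mock directory layout is an assumption for illustration):

```shell
# Build a throwaway directory tree shaped like /sys/bus/pci/devices/<pci>/net/<ifname>.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:af:00.0/net/cvl_0_0" "$sysfs/0000:af:00.1/net/cvl_0_1"

pci_devs=("0000:af:00.0" "0000:af:00.1")
net_devs=()
for pci in "${pci_devs[@]}"; do
    # Glob the netdev directories the kernel exposes for this PCI function.
    pci_net_devs=("$sysfs/$pci/net/"*)
    # Strip the leading path, keeping only interface names,
    # exactly as "${pci_net_devs[@]##*/}" does in the trace.
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```

With both mock functions populated this prints one "Found net devices under ..." line per PCI address and leaves `net_devs` holding two entries, which is the `(( 2 == 0 ))` check seen at marker @404.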
00:08:02.431 [2024-07-15 12:37:54.124437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.431 [2024-07-15 12:37:54.124551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.431 [2024-07-15 12:37:54.124661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.431 [2024-07-15 12:37:54.124662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.998 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:02.998 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:02.998 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:02.999 [2024-07-15 12:37:54.864789] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:02.999 [2024-07-15 12:37:54.924795] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:02.999 12:37:54 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:02.999 12:37:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:07.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:20.377 rmmod nvme_tcp 00:08:20.377 rmmod nvme_fabrics 00:08:20.377 rmmod nvme_keyring 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3780956 ']' 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3780956 00:08:20.377 12:38:12 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3780956 ']' 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3780956 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3780956 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3780956' 00:08:20.377 killing process with pid 3780956 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3780956 00:08:20.377 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3780956 00:08:20.636 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:20.636 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:20.636 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:20.636 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:20.636 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:20.636 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.637 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:20.637 12:38:12 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.540 12:38:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:22.540 00:08:22.540 real 0m26.384s 00:08:22.540 user 1m13.903s 00:08:22.540 sys 0m5.726s 00:08:22.540 12:38:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.540 12:38:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:22.540 ************************************ 00:08:22.540 END TEST nvmf_connect_disconnect 00:08:22.540 ************************************ 00:08:22.799 12:38:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:22.799 12:38:14 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:22.799 12:38:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:22.799 12:38:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.799 12:38:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:22.799 ************************************ 00:08:22.799 START TEST nvmf_multitarget 00:08:22.799 ************************************ 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:22.799 * Looking for test storage... 
00:08:22.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:22.799 12:38:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:29.367 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:29.367 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.367 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:29.368 Found net devices under 0000:af:00.0: cvl_0_0 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.368 12:38:20 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:29.368 Found net devices under 0000:af:00.1: cvl_0_1 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:29.368 12:38:20 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:29.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:08:29.368 00:08:29.368 --- 10.0.0.2 ping statistics --- 00:08:29.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.368 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:29.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:29.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:08:29.368 00:08:29.368 --- 10.0.0.1 ping statistics --- 00:08:29.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.368 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3787945 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3787945 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@829 -- # '[' -z 3787945 ']' 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:29.368 12:38:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:29.368 [2024-07-15 12:38:20.539975] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:08:29.368 [2024-07-15 12:38:20.540037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.368 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.368 [2024-07-15 12:38:20.629308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:29.368 [2024-07-15 12:38:20.720368] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.368 [2024-07-15 12:38:20.720412] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.368 [2024-07-15 12:38:20.720422] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.368 [2024-07-15 12:38:20.720431] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.368 [2024-07-15 12:38:20.720438] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:29.368 [2024-07-15 12:38:20.720480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.368 [2024-07-15 12:38:20.720596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.368 [2024-07-15 12:38:20.720708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:29.368 [2024-07-15 12:38:20.720709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.627 12:38:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:29.627 12:38:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:29.627 12:38:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:29.627 12:38:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:29.627 12:38:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:29.627 12:38:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.627 12:38:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:29.627 12:38:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:29.627 12:38:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:29.886 12:38:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:29.886 12:38:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:29.886 "nvmf_tgt_1" 00:08:29.886 12:38:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:30.144 "nvmf_tgt_2" 00:08:30.144 12:38:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:30.144 12:38:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:30.144 12:38:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:30.144 12:38:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:30.402 true 00:08:30.402 12:38:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:30.402 true 00:08:30.402 12:38:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:30.402 12:38:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:30.660 12:38:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:30.660 12:38:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:30.660 12:38:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:30.660 12:38:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:30.660 12:38:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:30.660 12:38:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:30.660 12:38:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:30.660 12:38:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.660 12:38:22 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:30.660 rmmod nvme_tcp 00:08:30.660 rmmod nvme_fabrics 00:08:30.660 rmmod nvme_keyring 00:08:30.660 12:38:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.660 12:38:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:30.660 12:38:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:30.660 12:38:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3787945 ']' 00:08:30.660 12:38:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3787945 00:08:30.660 12:38:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3787945 ']' 00:08:30.661 12:38:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3787945 00:08:30.661 12:38:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:30.661 12:38:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:30.661 12:38:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3787945 00:08:30.661 12:38:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:30.661 12:38:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:30.661 12:38:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3787945' 00:08:30.661 killing process with pid 3787945 00:08:30.661 12:38:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3787945 00:08:30.661 12:38:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3787945 00:08:30.919 12:38:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:30.919 12:38:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:30.919 12:38:22 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:30.919 12:38:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:30.919 12:38:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:30.919 12:38:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.919 12:38:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.919 12:38:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.454 12:38:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:33.454 00:08:33.454 real 0m10.257s 00:08:33.454 user 0m10.592s 00:08:33.454 sys 0m4.906s 00:08:33.454 12:38:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.454 12:38:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:33.454 ************************************ 00:08:33.454 END TEST nvmf_multitarget 00:08:33.454 ************************************ 00:08:33.454 12:38:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:33.454 12:38:24 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:33.454 12:38:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:33.454 12:38:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.454 12:38:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:33.454 ************************************ 00:08:33.454 START TEST nvmf_rpc 00:08:33.454 ************************************ 00:08:33.454 12:38:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:33.454 * Looking for test storage... 
00:08:33.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:33.454 12:38:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.454 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:33.454 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.454 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.454 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.454 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.454 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.454 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.454 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.454 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.454 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:33.455 12:38:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:38.730 12:38:30 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:38.730 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:38.730 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.730 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:38.731 Found net devices under 0000:af:00.0: cvl_0_0 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:38.731 Found net devices under 0000:af:00.1: cvl_0_1 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.731 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.990 12:38:30 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:38.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:08:38.990 00:08:38.990 --- 10.0.0.2 ping statistics --- 00:08:38.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.990 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:38.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:08:38.990 00:08:38.990 --- 10.0.0.1 ping statistics --- 00:08:38.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.990 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:38.990 
12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3791960 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3791960 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3791960 ']' 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.990 12:38:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.990 [2024-07-15 12:38:30.927045] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:08:38.991 [2024-07-15 12:38:30.927103] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.250 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.250 [2024-07-15 12:38:31.011224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.250 [2024-07-15 12:38:31.101610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.250 [2024-07-15 12:38:31.101652] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.250 [2024-07-15 12:38:31.101662] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.250 [2024-07-15 12:38:31.101671] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.250 [2024-07-15 12:38:31.101678] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:39.250 [2024-07-15 12:38:31.101721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.250 [2024-07-15 12:38:31.101834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.250 [2024-07-15 12:38:31.101947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.250 [2024-07-15 12:38:31.101946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.185 12:38:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.185 12:38:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:40.185 12:38:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:40.185 12:38:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:40.185 12:38:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.185 12:38:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.185 12:38:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:40.185 12:38:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.185 12:38:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.185 12:38:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.185 12:38:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:40.185 "tick_rate": 2200000000, 00:08:40.185 "poll_groups": [ 00:08:40.185 { 00:08:40.185 "name": "nvmf_tgt_poll_group_000", 00:08:40.185 "admin_qpairs": 0, 00:08:40.185 "io_qpairs": 0, 00:08:40.185 "current_admin_qpairs": 0, 00:08:40.185 "current_io_qpairs": 0, 00:08:40.185 "pending_bdev_io": 0, 00:08:40.185 "completed_nvme_io": 0, 00:08:40.185 "transports": [] 00:08:40.185 }, 00:08:40.185 { 00:08:40.185 "name": "nvmf_tgt_poll_group_001", 00:08:40.185 "admin_qpairs": 0, 00:08:40.185 "io_qpairs": 0, 00:08:40.185 "current_admin_qpairs": 
0, 00:08:40.185 "current_io_qpairs": 0, 00:08:40.185 "pending_bdev_io": 0, 00:08:40.185 "completed_nvme_io": 0, 00:08:40.185 "transports": [] 00:08:40.185 }, 00:08:40.185 { 00:08:40.185 "name": "nvmf_tgt_poll_group_002", 00:08:40.185 "admin_qpairs": 0, 00:08:40.185 "io_qpairs": 0, 00:08:40.185 "current_admin_qpairs": 0, 00:08:40.185 "current_io_qpairs": 0, 00:08:40.185 "pending_bdev_io": 0, 00:08:40.185 "completed_nvme_io": 0, 00:08:40.185 "transports": [] 00:08:40.185 }, 00:08:40.185 { 00:08:40.185 "name": "nvmf_tgt_poll_group_003", 00:08:40.185 "admin_qpairs": 0, 00:08:40.185 "io_qpairs": 0, 00:08:40.185 "current_admin_qpairs": 0, 00:08:40.185 "current_io_qpairs": 0, 00:08:40.185 "pending_bdev_io": 0, 00:08:40.185 "completed_nvme_io": 0, 00:08:40.185 "transports": [] 00:08:40.185 } 00:08:40.185 ] 00:08:40.185 }' 00:08:40.185 12:38:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:40.185 12:38:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:40.185 12:38:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:40.185 12:38:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:08:40.185 12:38:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:40.185 12:38:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:40.185 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:40.185 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.185 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.185 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.185 [2024-07-15 12:38:32.033918] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.185 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.185 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats 00:08:40.185 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.185 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.185 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.185 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:40.185 "tick_rate": 2200000000, 00:08:40.185 "poll_groups": [ 00:08:40.185 { 00:08:40.185 "name": "nvmf_tgt_poll_group_000", 00:08:40.185 "admin_qpairs": 0, 00:08:40.185 "io_qpairs": 0, 00:08:40.185 "current_admin_qpairs": 0, 00:08:40.185 "current_io_qpairs": 0, 00:08:40.185 "pending_bdev_io": 0, 00:08:40.185 "completed_nvme_io": 0, 00:08:40.185 "transports": [ 00:08:40.185 { 00:08:40.185 "trtype": "TCP" 00:08:40.185 } 00:08:40.185 ] 00:08:40.185 }, 00:08:40.185 { 00:08:40.185 "name": "nvmf_tgt_poll_group_001", 00:08:40.185 "admin_qpairs": 0, 00:08:40.185 "io_qpairs": 0, 00:08:40.185 "current_admin_qpairs": 0, 00:08:40.185 "current_io_qpairs": 0, 00:08:40.185 "pending_bdev_io": 0, 00:08:40.185 "completed_nvme_io": 0, 00:08:40.185 "transports": [ 00:08:40.185 { 00:08:40.185 "trtype": "TCP" 00:08:40.185 } 00:08:40.185 ] 00:08:40.185 }, 00:08:40.185 { 00:08:40.185 "name": "nvmf_tgt_poll_group_002", 00:08:40.185 "admin_qpairs": 0, 00:08:40.185 "io_qpairs": 0, 00:08:40.185 "current_admin_qpairs": 0, 00:08:40.185 "current_io_qpairs": 0, 00:08:40.185 "pending_bdev_io": 0, 00:08:40.185 "completed_nvme_io": 0, 00:08:40.185 "transports": [ 00:08:40.185 { 00:08:40.185 "trtype": "TCP" 00:08:40.185 } 00:08:40.185 ] 00:08:40.185 }, 00:08:40.185 { 00:08:40.185 "name": "nvmf_tgt_poll_group_003", 00:08:40.185 "admin_qpairs": 0, 00:08:40.185 "io_qpairs": 0, 00:08:40.185 "current_admin_qpairs": 0, 00:08:40.186 "current_io_qpairs": 0, 00:08:40.186 "pending_bdev_io": 0, 00:08:40.186 "completed_nvme_io": 0, 00:08:40.186 "transports": [ 00:08:40.186 { 00:08:40.186 "trtype": "TCP" 00:08:40.186 } 00:08:40.186 ] 00:08:40.186 } 
00:08:40.186 ] 00:08:40.186 }' 00:08:40.186 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:40.186 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:40.186 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:40.186 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:40.186 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:40.186 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:40.186 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:40.186 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:40.186 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.444 Malloc1 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.444 [2024-07-15 12:38:32.222447] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:40.444 [2024-07-15 12:38:32.251175] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:08:40.444 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:40.444 could not add new controller: failed to write to nvme-fabrics device 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.444 12:38:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:41.822 12:38:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:41.822 12:38:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:41.822 12:38:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:41.822 12:38:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:41.822 12:38:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:43.724 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:43.724 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:43.724 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:43.724 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:43.724 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:43.724 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:43.724 12:38:35 nvmf_tcp.nvmf_rpc 
-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:43.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:43.983 12:38:35 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:43.983 [2024-07-15 12:38:35.747668] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:08:43.983 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:43.983 could not add new controller: failed to write to nvme-fabrics device 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.983 12:38:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:45.361 12:38:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:45.361 12:38:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:45.361 12:38:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:45.361 12:38:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:45.361 12:38:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:47.265 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:47.265 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:47.265 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:47.265 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:47.265 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:47.265 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:47.265 12:38:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:47.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.524 12:38:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:47.524 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:47.524 12:38:39 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:47.524 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.524 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:47.524 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.524 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:47.524 12:38:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:47.524 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.524 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.524 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.524 12:38:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:47.524 12:38:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:47.524 12:38:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:47.524 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.524 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.524 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.524 12:38:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.525 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.525 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.525 [2024-07-15 12:38:39.392900] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.525 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:47.525 12:38:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:47.525 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.525 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.525 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.525 12:38:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:47.525 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.525 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.525 12:38:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.525 12:38:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:48.902 12:38:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:48.902 12:38:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:48.902 12:38:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:48.903 12:38:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:48.903 12:38:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:50.824 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:50.824 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:50.824 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:50.824 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:50.824 12:38:42 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:50.824 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:50.824 12:38:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:51.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.083 [2024-07-15 12:38:42.896923] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.083 12:38:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:52.460 12:38:44 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:52.460 12:38:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:52.460 12:38:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:52.460 12:38:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:52.460 12:38:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:54.364 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:54.364 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:54.364 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:54.364 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:54.364 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:54.364 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:54.364 12:38:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:54.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.364 12:38:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:54.364 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:54.624 12:38:46 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.624 [2024-07-15 12:38:46.364155] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:54.624 12:38:46 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.624 12:38:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:56.000 12:38:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:56.000 12:38:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:56.000 12:38:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:56.000 12:38:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:56.000 12:38:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:57.904 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:57.904 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:57.904 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:57.904 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:57.904 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:57.904 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 
0 00:08:57.904 12:38:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:58.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.177 [2024-07-15 12:38:49.970351] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.177 12:38:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:59.636 12:38:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:59.636 12:38:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
00:08:59.636 12:38:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:59.636 12:38:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:59.636 12:38:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:01.540 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:01.540 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:01.540 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:01.540 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:01.540 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:01.540 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:01.540 12:38:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:01.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.801 [2024-07-15 12:38:53.658261] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.801 12:38:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:03.178 12:38:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:03.178 12:38:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:03.178 12:38:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:03.178 12:38:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:03.178 12:38:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:05.714 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.714 12:38:57 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.714 [2024-07-15 12:38:57.210236] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.714 [2024-07-15 12:38:57.258373] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.714 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 [2024-07-15 12:38:57.310570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.715 12:38:57 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 [2024-07-15 12:38:57.358759] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 [2024-07-15 12:38:57.406953] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:05.715 "tick_rate": 2200000000, 00:09:05.715 "poll_groups": [ 00:09:05.715 { 00:09:05.715 "name": "nvmf_tgt_poll_group_000", 00:09:05.715 "admin_qpairs": 2, 00:09:05.715 "io_qpairs": 196, 00:09:05.715 "current_admin_qpairs": 0, 00:09:05.715 "current_io_qpairs": 0, 00:09:05.715 "pending_bdev_io": 0, 00:09:05.715 "completed_nvme_io": 291, 00:09:05.715 "transports": [ 00:09:05.715 { 00:09:05.715 "trtype": "TCP" 00:09:05.715 } 00:09:05.715 ] 00:09:05.715 }, 00:09:05.715 { 00:09:05.715 "name": "nvmf_tgt_poll_group_001", 00:09:05.715 "admin_qpairs": 2, 00:09:05.715 "io_qpairs": 196, 
00:09:05.715 "current_admin_qpairs": 0, 00:09:05.715 "current_io_qpairs": 0, 00:09:05.715 "pending_bdev_io": 0, 00:09:05.715 "completed_nvme_io": 252, 00:09:05.715 "transports": [ 00:09:05.715 { 00:09:05.715 "trtype": "TCP" 00:09:05.715 } 00:09:05.715 ] 00:09:05.715 }, 00:09:05.715 { 00:09:05.715 "name": "nvmf_tgt_poll_group_002", 00:09:05.715 "admin_qpairs": 1, 00:09:05.715 "io_qpairs": 196, 00:09:05.715 "current_admin_qpairs": 0, 00:09:05.715 "current_io_qpairs": 0, 00:09:05.715 "pending_bdev_io": 0, 00:09:05.715 "completed_nvme_io": 253, 00:09:05.715 "transports": [ 00:09:05.715 { 00:09:05.715 "trtype": "TCP" 00:09:05.715 } 00:09:05.715 ] 00:09:05.715 }, 00:09:05.715 { 00:09:05.715 "name": "nvmf_tgt_poll_group_003", 00:09:05.715 "admin_qpairs": 2, 00:09:05.715 "io_qpairs": 196, 00:09:05.715 "current_admin_qpairs": 0, 00:09:05.715 "current_io_qpairs": 0, 00:09:05.715 "pending_bdev_io": 0, 00:09:05.715 "completed_nvme_io": 338, 00:09:05.715 "transports": [ 00:09:05.715 { 00:09:05.715 "trtype": "TCP" 00:09:05.715 } 00:09:05.715 ] 00:09:05.715 } 00:09:05.715 ] 00:09:05.715 }' 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 784 > 0 )) 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:05.715 12:38:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:05.715 rmmod nvme_tcp 00:09:05.716 rmmod nvme_fabrics 00:09:05.716 rmmod nvme_keyring 00:09:05.716 12:38:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:05.716 12:38:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:05.716 12:38:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:05.716 12:38:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3791960 ']' 00:09:05.716 12:38:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3791960 00:09:05.716 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3791960 ']' 00:09:05.716 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3791960 00:09:05.716 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:05.716 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:05.716 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3791960 00:09:05.975 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:05.975 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:05.975 
12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3791960' 00:09:05.975 killing process with pid 3791960 00:09:05.975 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3791960 00:09:05.975 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 3791960 00:09:05.975 12:38:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:05.975 12:38:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:05.975 12:38:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:05.975 12:38:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:05.975 12:38:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:05.975 12:38:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.975 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:05.975 12:38:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.512 12:38:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:08.512 00:09:08.512 real 0m35.107s 00:09:08.512 user 1m48.242s 00:09:08.512 sys 0m6.514s 00:09:08.512 12:38:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.512 12:38:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.512 ************************************ 00:09:08.512 END TEST nvmf_rpc 00:09:08.512 ************************************ 00:09:08.512 12:39:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:08.512 12:39:00 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:08.512 12:39:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:08.512 12:39:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:09:08.512 12:39:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:08.512 ************************************ 00:09:08.512 START TEST nvmf_invalid 00:09:08.512 ************************************ 00:09:08.512 12:39:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:08.512 * Looking for test storage... 00:09:08.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:08.512 12:39:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.512 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:08.512 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.512 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.512 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.512 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.512 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.512 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:08.513 12:39:00 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:08.513 12:39:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:13.787 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:13.787 12:39:05 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:13.787 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:13.787 Found net devices under 0000:af:00.0: cvl_0_0 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:13.787 Found net devices under 0000:af:00.1: cvl_0_1 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.787 12:39:05 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:13.787 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.046 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.046 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.046 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:14.046 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.046 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:14.046 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.046 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:14.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:14.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:09:14.046 00:09:14.046 --- 10.0.0.2 ping statistics --- 00:09:14.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.046 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:09:14.046 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:14.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:09:14.046 00:09:14.046 --- 10.0.0.1 ping statistics --- 00:09:14.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.046 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:09:14.046 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.046 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:14.046 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:14.046 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.046 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:14.046 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:14.046 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.046 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:14.046 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:14.305 12:39:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:14.305 12:39:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:14.305 12:39:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:14.305 12:39:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:14.305 12:39:06 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@481 -- # nvmfpid=3800697 00:09:14.305 12:39:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3800697 00:09:14.305 12:39:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:14.305 12:39:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3800697 ']' 00:09:14.305 12:39:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.305 12:39:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.305 12:39:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.305 12:39:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.305 12:39:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:14.305 [2024-07-15 12:39:06.059194] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:09:14.305 [2024-07-15 12:39:06.059261] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.305 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.305 [2024-07-15 12:39:06.144932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.305 [2024-07-15 12:39:06.236325] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.305 [2024-07-15 12:39:06.236369] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:14.305 [2024-07-15 12:39:06.236380] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.305 [2024-07-15 12:39:06.236389] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.305 [2024-07-15 12:39:06.236397] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.305 [2024-07-15 12:39:06.236496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.305 [2024-07-15 12:39:06.236606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.305 [2024-07-15 12:39:06.236717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.305 [2024-07-15 12:39:06.236718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.240 12:39:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.240 12:39:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:15.240 12:39:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:15.240 12:39:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:15.241 12:39:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:15.241 12:39:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.241 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:15.241 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode24635 00:09:15.499 [2024-07-15 12:39:07.279269] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:15.499 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- 
# out='request: 00:09:15.499 { 00:09:15.499 "nqn": "nqn.2016-06.io.spdk:cnode24635", 00:09:15.499 "tgt_name": "foobar", 00:09:15.499 "method": "nvmf_create_subsystem", 00:09:15.499 "req_id": 1 00:09:15.499 } 00:09:15.499 Got JSON-RPC error response 00:09:15.499 response: 00:09:15.499 { 00:09:15.499 "code": -32603, 00:09:15.499 "message": "Unable to find target foobar" 00:09:15.499 }' 00:09:15.499 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:15.499 { 00:09:15.499 "nqn": "nqn.2016-06.io.spdk:cnode24635", 00:09:15.499 "tgt_name": "foobar", 00:09:15.499 "method": "nvmf_create_subsystem", 00:09:15.499 "req_id": 1 00:09:15.499 } 00:09:15.499 Got JSON-RPC error response 00:09:15.499 response: 00:09:15.499 { 00:09:15.499 "code": -32603, 00:09:15.499 "message": "Unable to find target foobar" 00:09:15.499 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:15.499 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:15.499 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29966 00:09:15.758 [2024-07-15 12:39:07.459977] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29966: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:15.758 { 00:09:15.758 "nqn": "nqn.2016-06.io.spdk:cnode29966", 00:09:15.758 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:15.758 "method": "nvmf_create_subsystem", 00:09:15.758 "req_id": 1 00:09:15.758 } 00:09:15.758 Got JSON-RPC error response 00:09:15.758 response: 00:09:15.758 { 00:09:15.758 "code": -32602, 00:09:15.758 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:15.758 }' 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:15.758 { 00:09:15.758 "nqn": 
"nqn.2016-06.io.spdk:cnode29966", 00:09:15.758 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:15.758 "method": "nvmf_create_subsystem", 00:09:15.758 "req_id": 1 00:09:15.758 } 00:09:15.758 Got JSON-RPC error response 00:09:15.758 response: 00:09:15.758 { 00:09:15.758 "code": -32602, 00:09:15.758 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:15.758 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode8032 00:09:15.758 [2024-07-15 12:39:07.640668] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8032: invalid model number 'SPDK_Controller' 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:15.758 { 00:09:15.758 "nqn": "nqn.2016-06.io.spdk:cnode8032", 00:09:15.758 "model_number": "SPDK_Controller\u001f", 00:09:15.758 "method": "nvmf_create_subsystem", 00:09:15.758 "req_id": 1 00:09:15.758 } 00:09:15.758 Got JSON-RPC error response 00:09:15.758 response: 00:09:15.758 { 00:09:15.758 "code": -32602, 00:09:15.758 "message": "Invalid MN SPDK_Controller\u001f" 00:09:15.758 }' 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:15.758 { 00:09:15.758 "nqn": "nqn.2016-06.io.spdk:cnode8032", 00:09:15.758 "model_number": "SPDK_Controller\u001f", 00:09:15.758 "method": "nvmf_create_subsystem", 00:09:15.758 "req_id": 1 00:09:15.758 } 00:09:15.758 Got JSON-RPC error response 00:09:15.758 response: 00:09:15.758 { 00:09:15.758 "code": -32602, 00:09:15.758 "message": "Invalid MN SPDK_Controller\u001f" 00:09:15.758 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@19 -- # local length=21 ll 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:15.758 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.759 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.759 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:15.759 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 
-- # echo -e '\x49' 00:09:15.759 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:15.759 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.759 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 
00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.018 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 
00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.019 
12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ - == \- ]] 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@29 -- # string='\-RIl,,M|US%pv!v4P'\''={E' 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '\-RIl,,M|US%pv!v4P'\''={E' 00:09:16.019 12:39:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '\-RIl,,M|US%pv!v4P'\''={E' nqn.2016-06.io.spdk:cnode8838 00:09:16.279 [2024-07-15 12:39:08.038196] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8838: invalid serial number '\-RIl,,M|US%pv!v4P'={E' 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:16.279 { 00:09:16.279 "nqn": "nqn.2016-06.io.spdk:cnode8838", 00:09:16.279 "serial_number": "\\-RIl,,M|US%pv!v4P'\''={E", 00:09:16.279 "method": "nvmf_create_subsystem", 00:09:16.279 "req_id": 1 00:09:16.279 } 00:09:16.279 Got JSON-RPC error response 00:09:16.279 response: 00:09:16.279 { 00:09:16.279 "code": -32602, 00:09:16.279 "message": "Invalid SN \\-RIl,,M|US%pv!v4P'\''={E" 00:09:16.279 }' 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:16.279 { 00:09:16.279 "nqn": "nqn.2016-06.io.spdk:cnode8838", 00:09:16.279 "serial_number": "\\-RIl,,M|US%pv!v4P'={E", 00:09:16.279 "method": "nvmf_create_subsystem", 00:09:16.279 "req_id": 1 00:09:16.279 } 00:09:16.279 Got JSON-RPC error response 00:09:16.279 response: 00:09:16.279 { 00:09:16.279 "code": -32602, 00:09:16.279 "message": "Invalid SN \\-RIl,,M|US%pv!v4P'={E" 00:09:16.279 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:16.279 12:39:08 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@25 -- # string+=g 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 58 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:16.279 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x46' 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:16.280 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 
00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 
00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ t == \- ]] 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'tIg*B_^:749vDPu8FFNo]jgE!|U~E,L5B!i[Xld(Y' 00:09:16.540 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'tIg*B_^:749vDPu8FFNo]jgE!|U~E,L5B!i[Xld(Y' nqn.2016-06.io.spdk:cnode31037 00:09:16.798 [2024-07-15 12:39:08.499975] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31037: invalid model number 'tIg*B_^:749vDPu8FFNo]jgE!|U~E,L5B!i[Xld(Y' 00:09:16.798 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:16.799 { 00:09:16.799 "nqn": 
"nqn.2016-06.io.spdk:cnode31037", 00:09:16.799 "model_number": "tIg*B_^:749vDPu8FFNo]jgE!|U~E,L5B!i[Xld(Y", 00:09:16.799 "method": "nvmf_create_subsystem", 00:09:16.799 "req_id": 1 00:09:16.799 } 00:09:16.799 Got JSON-RPC error response 00:09:16.799 response: 00:09:16.799 { 00:09:16.799 "code": -32602, 00:09:16.799 "message": "Invalid MN tIg*B_^:749vDPu8FFNo]jgE!|U~E,L5B!i[Xld(Y" 00:09:16.799 }' 00:09:16.799 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:16.799 { 00:09:16.799 "nqn": "nqn.2016-06.io.spdk:cnode31037", 00:09:16.799 "model_number": "tIg*B_^:749vDPu8FFNo]jgE!|U~E,L5B!i[Xld(Y", 00:09:16.799 "method": "nvmf_create_subsystem", 00:09:16.799 "req_id": 1 00:09:16.799 } 00:09:16.799 Got JSON-RPC error response 00:09:16.799 response: 00:09:16.799 { 00:09:16.799 "code": -32602, 00:09:16.799 "message": "Invalid MN tIg*B_^:749vDPu8FFNo]jgE!|U~E,L5B!i[Xld(Y" 00:09:16.799 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:16.799 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:17.057 [2024-07-15 12:39:08.765124] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.057 12:39:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:17.316 12:39:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:17.316 12:39:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:17.316 12:39:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:17.316 12:39:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:17.316 12:39:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:17.574 
[2024-07-15 12:39:09.291286] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:17.574 12:39:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:17.574 { 00:09:17.574 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:17.574 "listen_address": { 00:09:17.574 "trtype": "tcp", 00:09:17.574 "traddr": "", 00:09:17.574 "trsvcid": "4421" 00:09:17.574 }, 00:09:17.574 "method": "nvmf_subsystem_remove_listener", 00:09:17.574 "req_id": 1 00:09:17.574 } 00:09:17.574 Got JSON-RPC error response 00:09:17.574 response: 00:09:17.574 { 00:09:17.574 "code": -32602, 00:09:17.574 "message": "Invalid parameters" 00:09:17.574 }' 00:09:17.574 12:39:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:17.574 { 00:09:17.574 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:17.574 "listen_address": { 00:09:17.574 "trtype": "tcp", 00:09:17.574 "traddr": "", 00:09:17.574 "trsvcid": "4421" 00:09:17.574 }, 00:09:17.574 "method": "nvmf_subsystem_remove_listener", 00:09:17.574 "req_id": 1 00:09:17.574 } 00:09:17.574 Got JSON-RPC error response 00:09:17.574 response: 00:09:17.574 { 00:09:17.574 "code": -32602, 00:09:17.574 "message": "Invalid parameters" 00:09:17.574 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:17.574 12:39:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8441 -i 0 00:09:17.832 [2024-07-15 12:39:09.560201] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8441: invalid cntlid range [0-65519] 00:09:17.832 12:39:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:17.832 { 00:09:17.832 "nqn": "nqn.2016-06.io.spdk:cnode8441", 00:09:17.832 "min_cntlid": 0, 00:09:17.832 "method": "nvmf_create_subsystem", 00:09:17.832 "req_id": 1 00:09:17.832 } 00:09:17.832 Got JSON-RPC error response 00:09:17.832 response: 00:09:17.832 { 00:09:17.832 
"code": -32602, 00:09:17.832 "message": "Invalid cntlid range [0-65519]" 00:09:17.832 }' 00:09:17.832 12:39:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:17.832 { 00:09:17.832 "nqn": "nqn.2016-06.io.spdk:cnode8441", 00:09:17.832 "min_cntlid": 0, 00:09:17.832 "method": "nvmf_create_subsystem", 00:09:17.832 "req_id": 1 00:09:17.832 } 00:09:17.832 Got JSON-RPC error response 00:09:17.832 response: 00:09:17.832 { 00:09:17.832 "code": -32602, 00:09:17.832 "message": "Invalid cntlid range [0-65519]" 00:09:17.832 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:17.832 12:39:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11611 -i 65520 00:09:18.089 [2024-07-15 12:39:09.825224] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11611: invalid cntlid range [65520-65519] 00:09:18.089 12:39:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:18.089 { 00:09:18.089 "nqn": "nqn.2016-06.io.spdk:cnode11611", 00:09:18.089 "min_cntlid": 65520, 00:09:18.089 "method": "nvmf_create_subsystem", 00:09:18.089 "req_id": 1 00:09:18.089 } 00:09:18.089 Got JSON-RPC error response 00:09:18.089 response: 00:09:18.089 { 00:09:18.089 "code": -32602, 00:09:18.089 "message": "Invalid cntlid range [65520-65519]" 00:09:18.089 }' 00:09:18.089 12:39:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:18.089 { 00:09:18.089 "nqn": "nqn.2016-06.io.spdk:cnode11611", 00:09:18.089 "min_cntlid": 65520, 00:09:18.089 "method": "nvmf_create_subsystem", 00:09:18.089 "req_id": 1 00:09:18.089 } 00:09:18.089 Got JSON-RPC error response 00:09:18.089 response: 00:09:18.089 { 00:09:18.089 "code": -32602, 00:09:18.089 "message": "Invalid cntlid range [65520-65519]" 00:09:18.089 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:18.089 12:39:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11956 -I 0 00:09:18.347 [2024-07-15 12:39:10.078165] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11956: invalid cntlid range [1-0] 00:09:18.347 12:39:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:18.347 { 00:09:18.347 "nqn": "nqn.2016-06.io.spdk:cnode11956", 00:09:18.347 "max_cntlid": 0, 00:09:18.347 "method": "nvmf_create_subsystem", 00:09:18.347 "req_id": 1 00:09:18.347 } 00:09:18.347 Got JSON-RPC error response 00:09:18.347 response: 00:09:18.347 { 00:09:18.347 "code": -32602, 00:09:18.347 "message": "Invalid cntlid range [1-0]" 00:09:18.347 }' 00:09:18.347 12:39:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:18.347 { 00:09:18.347 "nqn": "nqn.2016-06.io.spdk:cnode11956", 00:09:18.347 "max_cntlid": 0, 00:09:18.347 "method": "nvmf_create_subsystem", 00:09:18.347 "req_id": 1 00:09:18.347 } 00:09:18.347 Got JSON-RPC error response 00:09:18.347 response: 00:09:18.347 { 00:09:18.347 "code": -32602, 00:09:18.347 "message": "Invalid cntlid range [1-0]" 00:09:18.347 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:18.347 12:39:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6267 -I 65520 00:09:18.606 [2024-07-15 12:39:10.343244] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6267: invalid cntlid range [1-65520] 00:09:18.606 12:39:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:18.606 { 00:09:18.606 "nqn": "nqn.2016-06.io.spdk:cnode6267", 00:09:18.606 "max_cntlid": 65520, 00:09:18.606 "method": "nvmf_create_subsystem", 00:09:18.606 "req_id": 1 00:09:18.606 } 00:09:18.606 Got JSON-RPC error response 00:09:18.606 response: 00:09:18.606 { 00:09:18.606 "code": -32602, 
00:09:18.606 "message": "Invalid cntlid range [1-65520]" 00:09:18.606 }' 00:09:18.606 12:39:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:18.606 { 00:09:18.606 "nqn": "nqn.2016-06.io.spdk:cnode6267", 00:09:18.606 "max_cntlid": 65520, 00:09:18.606 "method": "nvmf_create_subsystem", 00:09:18.606 "req_id": 1 00:09:18.606 } 00:09:18.606 Got JSON-RPC error response 00:09:18.606 response: 00:09:18.606 { 00:09:18.606 "code": -32602, 00:09:18.606 "message": "Invalid cntlid range [1-65520]" 00:09:18.606 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:18.606 12:39:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27934 -i 6 -I 5 00:09:18.606 [2024-07-15 12:39:10.531997] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27934: invalid cntlid range [6-5] 00:09:18.864 12:39:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:18.864 { 00:09:18.864 "nqn": "nqn.2016-06.io.spdk:cnode27934", 00:09:18.864 "min_cntlid": 6, 00:09:18.864 "max_cntlid": 5, 00:09:18.864 "method": "nvmf_create_subsystem", 00:09:18.864 "req_id": 1 00:09:18.864 } 00:09:18.864 Got JSON-RPC error response 00:09:18.864 response: 00:09:18.864 { 00:09:18.864 "code": -32602, 00:09:18.864 "message": "Invalid cntlid range [6-5]" 00:09:18.864 }' 00:09:18.864 12:39:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:18.864 { 00:09:18.864 "nqn": "nqn.2016-06.io.spdk:cnode27934", 00:09:18.864 "min_cntlid": 6, 00:09:18.864 "max_cntlid": 5, 00:09:18.864 "method": "nvmf_create_subsystem", 00:09:18.864 "req_id": 1 00:09:18.864 } 00:09:18.864 Got JSON-RPC error response 00:09:18.864 response: 00:09:18.864 { 00:09:18.864 "code": -32602, 00:09:18.864 "message": "Invalid cntlid range [6-5]" 00:09:18.865 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:18.865 12:39:10 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:18.865 12:39:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:18.865 { 00:09:18.865 "name": "foobar", 00:09:18.865 "method": "nvmf_delete_target", 00:09:18.865 "req_id": 1 00:09:18.865 } 00:09:18.865 Got JSON-RPC error response 00:09:18.865 response: 00:09:18.865 { 00:09:18.865 "code": -32602, 00:09:18.865 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:18.865 }' 00:09:18.865 12:39:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:18.865 { 00:09:18.865 "name": "foobar", 00:09:18.865 "method": "nvmf_delete_target", 00:09:18.865 "req_id": 1 00:09:18.865 } 00:09:18.865 Got JSON-RPC error response 00:09:18.865 response: 00:09:18.865 { 00:09:18.865 "code": -32602, 00:09:18.865 "message": "The specified target doesn't exist, cannot delete it." 00:09:18.865 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:18.865 12:39:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:18.865 12:39:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:18.865 12:39:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:18.865 12:39:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:18.865 12:39:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:18.865 12:39:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:18.865 12:39:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:18.865 12:39:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:18.865 rmmod nvme_tcp 00:09:18.865 rmmod nvme_fabrics 00:09:18.865 rmmod nvme_keyring 00:09:18.865 12:39:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:09:18.865 12:39:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:18.865 12:39:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:18.865 12:39:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3800697 ']' 00:09:18.865 12:39:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3800697 00:09:18.865 12:39:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 3800697 ']' 00:09:18.865 12:39:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 3800697 00:09:18.865 12:39:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:09:18.865 12:39:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:18.865 12:39:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3800697 00:09:19.123 12:39:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:19.123 12:39:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:19.123 12:39:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3800697' 00:09:19.123 killing process with pid 3800697 00:09:19.123 12:39:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 3800697 00:09:19.123 12:39:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 3800697 00:09:19.123 12:39:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:19.123 12:39:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:19.123 12:39:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:19.123 12:39:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:19.123 12:39:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:19.123 12:39:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:09:19.123 12:39:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.123 12:39:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.655 12:39:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:21.655 00:09:21.655 real 0m13.049s 00:09:21.655 user 0m23.619s 00:09:21.655 sys 0m5.529s 00:09:21.655 12:39:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.655 12:39:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:21.655 ************************************ 00:09:21.655 END TEST nvmf_invalid 00:09:21.655 ************************************ 00:09:21.655 12:39:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:21.655 12:39:13 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:21.655 12:39:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:21.655 12:39:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.655 12:39:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:21.655 ************************************ 00:09:21.655 START TEST nvmf_abort 00:09:21.655 ************************************ 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:21.655 * Looking for test storage... 
00:09:21.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:21.655 12:39:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:26.924 12:39:18 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:26.924 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:26.924 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- 
# (( 0 > 0 )) 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:26.924 Found net devices under 0000:af:00.0: cvl_0_0 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:26.924 Found net devices under 
0000:af:00.1: cvl_0_1 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:26.924 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:27.182 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:27.182 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:27.183 12:39:18 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:27.183 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:27.183 12:39:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:27.183 12:39:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:27.183 12:39:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:27.183 12:39:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:27.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:09:27.183 00:09:27.183 --- 10.0.0.2 ping statistics --- 00:09:27.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.183 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:09:27.183 12:39:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:27.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:27.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:09:27.183 00:09:27.183 --- 10.0.0.1 ping statistics --- 00:09:27.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.183 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:09:27.183 12:39:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.183 12:39:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:27.183 12:39:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:27.183 12:39:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.183 12:39:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:27.183 12:39:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:27.183 12:39:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.183 12:39:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:27.183 12:39:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:27.441 12:39:19 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:27.441 12:39:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:27.441 12:39:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:27.441 12:39:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:27.441 12:39:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3805765 00:09:27.441 12:39:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3805765 00:09:27.441 12:39:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:27.441 12:39:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 3805765 ']' 00:09:27.441 12:39:19 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.441 12:39:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.441 12:39:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.441 12:39:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.441 12:39:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:27.441 [2024-07-15 12:39:19.207006] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:09:27.441 [2024-07-15 12:39:19.207063] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.441 EAL: No free 2048 kB hugepages reported on node 1 00:09:27.441 [2024-07-15 12:39:19.294619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:27.699 [2024-07-15 12:39:19.395985] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.699 [2024-07-15 12:39:19.396037] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.699 [2024-07-15 12:39:19.396049] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.699 [2024-07-15 12:39:19.396067] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.699 [2024-07-15 12:39:19.396076] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:27.699 [2024-07-15 12:39:19.396211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.699 [2024-07-15 12:39:19.396322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.699 [2024-07-15 12:39:19.396326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.263 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.263 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:28.263 12:39:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:28.263 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:28.263 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.263 12:39:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.263 12:39:20 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:28.263 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.263 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.263 [2024-07-15 12:39:20.198876] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.521 Malloc0 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.521 Delay0 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.521 [2024-07-15 12:39:20.283523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.521 12:39:20 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:28.521 EAL: No free 2048 kB hugepages reported on node 1 00:09:28.521 [2024-07-15 12:39:20.402637] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:31.048 Initializing NVMe Controllers 00:09:31.048 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:31.048 controller IO queue size 128 less than required 00:09:31.048 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:31.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:31.048 Initialization complete. Launching workers. 
00:09:31.048 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29410 00:09:31.048 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29471, failed to submit 62 00:09:31.048 success 29414, unsuccess 57, failed 0 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:31.048 rmmod nvme_tcp 00:09:31.048 rmmod nvme_fabrics 00:09:31.048 rmmod nvme_keyring 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3805765 ']' 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3805765 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 3805765 ']' 00:09:31.048 12:39:22 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 3805765 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3805765 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3805765' 00:09:31.048 killing process with pid 3805765 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 3805765 00:09:31.048 12:39:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 3805765 00:09:31.307 12:39:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:31.307 12:39:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:31.307 12:39:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:31.307 12:39:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:31.307 12:39:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:31.307 12:39:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.307 12:39:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:31.307 12:39:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.209 12:39:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:33.209 00:09:33.209 real 0m11.904s 00:09:33.209 user 0m14.195s 00:09:33.209 sys 0m5.403s 00:09:33.209 12:39:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:09:33.209 12:39:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:33.209 ************************************ 00:09:33.209 END TEST nvmf_abort 00:09:33.209 ************************************ 00:09:33.209 12:39:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:33.209 12:39:25 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:33.209 12:39:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:33.209 12:39:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.209 12:39:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.209 ************************************ 00:09:33.209 START TEST nvmf_ns_hotplug_stress 00:09:33.209 ************************************ 00:09:33.209 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:33.468 * Looking for test storage... 
00:09:33.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.468 12:39:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:33.468 12:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:40.033 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:40.033 
12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:40.033 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:40.033 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:40.034 
Found net devices under 0000:af:00.0: cvl_0_0 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:40.034 Found net devices under 0000:af:00.1: cvl_0_1 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:40.034 12:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:40.034 12:39:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:40.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:40.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:09:40.034 00:09:40.034 --- 10.0.0.2 ping statistics --- 00:09:40.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.034 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:40.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:40.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:09:40.034 00:09:40.034 --- 10.0.0.1 ping statistics --- 00:09:40.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.034 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3810036 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3810036 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 3810036 ']' 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:40.034 12:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:40.034 [2024-07-15 12:39:31.214951] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:09:40.034 [2024-07-15 12:39:31.215012] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.034 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.034 [2024-07-15 12:39:31.302404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:40.034 [2024-07-15 12:39:31.413497] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.034 [2024-07-15 12:39:31.413538] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.034 [2024-07-15 12:39:31.413551] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.034 [2024-07-15 12:39:31.413562] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.034 [2024-07-15 12:39:31.413572] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:40.034 [2024-07-15 12:39:31.413699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.034 [2024-07-15 12:39:31.413811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.034 [2024-07-15 12:39:31.413814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.312 12:39:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:40.312 12:39:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:09:40.312 12:39:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:40.313 12:39:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:40.313 12:39:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:40.313 12:39:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.313 12:39:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:40.313 12:39:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:40.571 [2024-07-15 12:39:32.347922] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.571 12:39:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:40.829 12:39:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.089 [2024-07-15 12:39:32.875654] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:09:41.089 12:39:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:41.373 12:39:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:41.638 Malloc0 00:09:41.638 12:39:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:41.895 Delay0 00:09:41.895 12:39:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.152 12:39:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:42.410 NULL1 00:09:42.410 12:39:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:42.668 12:39:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:42.668 12:39:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3810814 00:09:42.668 12:39:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:09:42.668 12:39:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.668 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.040 Read completed with error (sct=0, sc=11) 00:09:44.040 12:39:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.040 12:39:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:44.040 12:39:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:44.298 true 00:09:44.298 12:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:09:44.298 12:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.261 12:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.520 12:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:45.520 12:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:45.778 true 00:09:45.779 12:39:37 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:09:45.779 12:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.037 12:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.296 12:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:46.296 12:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:46.296 true 00:09:46.554 12:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:09:46.554 12:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.813 12:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.071 12:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:47.072 12:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:47.072 true 00:09:47.330 12:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:09:47.330 12:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.264 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:09:48.265 12:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.524 12:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:48.524 12:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:48.524 true 00:09:48.782 12:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:09:48.782 12:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.041 12:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.298 12:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:49.298 12:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:49.298 true 00:09:49.555 12:39:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:09:49.555 12:39:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.490 12:39:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.752 12:39:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:50.752 12:39:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:51.009 true 00:09:51.009 12:39:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:09:51.009 12:39:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.267 12:39:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.525 12:39:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:51.525 12:39:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:51.782 true 00:09:51.782 12:39:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:09:51.782 12:39:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.716 12:39:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.716 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:09:52.974 12:39:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:52.974 12:39:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:53.232 true 00:09:53.232 12:39:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:09:53.232 12:39:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.490 12:39:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.748 12:39:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:53.748 12:39:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:54.006 true 00:09:54.006 12:39:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:09:54.006 12:39:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.264 12:39:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.522 12:39:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:54.522 12:39:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 
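The iterations above all trace the same few lines of ns_hotplug_stress.sh (sh@44-50): check that the I/O process is still alive with `kill -0`, detach and re-attach the namespace, bump `null_size`, and resize the null bdev. A minimal stubbed sketch of that loop — the `rpc` function is a placeholder for `scripts/rpc.py`, `perf_pid` stands in for the real I/O process (3810814 in this log), and the fixed iteration count is an assumption; the real script loops until the I/O process exits:

```shell
# Stubbed reconstruction of the loop traced at ns_hotplug_stress.sh@44-50.
rpc() { :; }          # placeholder for scripts/rpc.py against a live target

perf_pid=$$           # stands in for the background I/O process being monitored
null_size=1000
for _ in 1 2 3 4 5; do
    kill -0 "$perf_pid" || break                                 # sh@44: stop when I/O exits
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: detach namespace 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: re-attach via Delay0
    null_size=$((null_size + 1))                                 # sh@49: grow target size
    rpc bdev_null_resize NULL1 "$null_size"                      # sh@50: resize the null bdev
done
echo "$null_size"
```

The `kill -0` probe sends no signal; it only tests process existence, which is why the log shows one `kill -0 3810814` line per iteration until the final "No such process" ends the loop.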
00:09:54.780 true 00:09:54.780 12:39:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:09:54.780 12:39:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.715 12:39:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.974 12:39:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:55.974 12:39:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:56.232 true 00:09:56.232 12:39:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:09:56.232 12:39:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.491 12:39:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.749 12:39:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:56.749 12:39:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:57.027 true 00:09:57.027 12:39:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:09:57.027 12:39:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.285 12:39:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.543 12:39:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:57.543 12:39:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:57.801 true 00:09:57.801 12:39:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:09:57.801 12:39:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.736 12:39:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.994 12:39:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:58.994 12:39:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:59.258 true 00:09:59.258 12:39:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:09:59.258 12:39:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.515 12:39:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.772 12:39:51 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:59.772 12:39:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:00.030 true 00:10:00.030 12:39:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:10:00.030 12:39:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.288 12:39:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.546 12:39:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:00.546 12:39:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:00.805 true 00:10:00.805 12:39:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:10:00.805 12:39:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.188 12:39:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.188 12:39:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:02.188 12:39:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:02.446 true 00:10:02.446 
12:39:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:10:02.446 12:39:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.705 12:39:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.964 12:39:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:02.964 12:39:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:03.223 true 00:10:03.223 12:39:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:10:03.223 12:39:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.482 12:39:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.741 12:39:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:03.741 12:39:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:04.000 true 00:10:04.000 12:39:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:10:04.000 12:39:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
1 00:10:04.937 12:39:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.195 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.195 12:39:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:05.195 12:39:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:05.453 true 00:10:05.453 12:39:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:10:05.453 12:39:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.711 12:39:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.970 12:39:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:05.970 12:39:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:06.228 true 00:10:06.228 12:39:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:10:06.228 12:39:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.487 12:39:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:10:06.745 12:39:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:06.745 12:39:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:07.004 true 00:10:07.004 12:39:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:10:07.004 12:39:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.940 12:39:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.199 12:40:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:08.199 12:40:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:08.457 true 00:10:08.457 12:40:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:10:08.457 12:40:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.715 12:40:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.974 12:40:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:08.974 12:40:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 
00:10:09.232 true 00:10:09.233 12:40:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:10:09.233 12:40:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.492 12:40:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.750 12:40:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:09.750 12:40:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:10.008 true 00:10:10.008 12:40:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:10:10.008 12:40:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.411 12:40:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.411 12:40:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:11.411 12:40:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:11.715 true 00:10:11.715 12:40:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:10:11.715 12:40:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.973 12:40:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.232 12:40:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:12.232 12:40:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:12.490 true 00:10:12.490 12:40:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:10:12.490 12:40:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.748 12:40:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.007 12:40:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:13.007 12:40:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:13.007 Initializing NVMe Controllers 00:10:13.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:13.007 Controller IO queue size 128, less than required. 00:10:13.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:13.007 Controller IO queue size 128, less than required. 00:10:13.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:13.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:13.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:13.007 Initialization complete. Launching workers.
00:10:13.007 ========================================================
00:10:13.007 Latency(us)
00:10:13.007 Device Information : IOPS MiB/s Average min max
00:10:13.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 392.18 0.19 135860.53 4898.48 1045649.89
00:10:13.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3584.74 1.75 35709.09 7549.68 570808.49
00:10:13.007 ========================================================
00:10:13.007 Total : 3976.91 1.94 45585.33 4898.48 1045649.89
00:10:13.266 true 00:10:13.266 12:40:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3810814 00:10:13.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3810814) - No such process 00:10:13.266 12:40:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3810814 00:10:13.266 12:40:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.525 12:40:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:13.784 12:40:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:13.784 12:40:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:13.784 12:40:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:13.784 12:40:05 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:13.784 12:40:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:14.044 null0 00:10:14.044 12:40:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:14.044 12:40:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:14.044 12:40:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:14.302 null1 00:10:14.302 12:40:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:14.302 12:40:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:14.302 12:40:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:14.302 null2 00:10:14.303 12:40:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:14.303 12:40:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:14.303 12:40:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:14.561 null3 00:10:14.561 12:40:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:14.561 12:40:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:14.561 12:40:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:14.819 null4 00:10:14.819 12:40:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:14.819 12:40:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:14.819 12:40:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:15.077 null5 00:10:15.077 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:15.077 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:15.077 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:15.336 null6 00:10:15.336 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:15.336 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:15.336 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:15.595 null7 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.595 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3816852 3816854 3816857 3816860 3816863 3816866 3816869 3816872 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:10:15.596 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:15.854 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.855 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:15.855 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:15.855 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:15.855 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:15.855 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:15.855 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:16.113 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.113 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.113 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:16.113 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.113 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.113 12:40:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:16.113 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:16.113 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.113 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.113 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:16.113 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.113 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.113 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:16.113 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.113 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.113 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.113 12:40:08 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:16.113 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.113 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:16.113 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.113 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.113 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:16.371 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:16.371 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.371 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:16.371 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:16.371 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.371 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:10:16.371 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:16.371 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:16.371 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:16.371 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:16.630 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:16.888 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:16.888 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.888 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:16.888 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:16.888 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:16.888 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.888 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.888 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:16.888 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:16.888 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.888 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.888 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:17.146 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.146 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.146 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:17.146 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.146 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.146 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:17.146 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.146 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.147 12:40:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:17.147 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.147 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:17.147 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.147 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:17.147 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.147 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.147 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.147 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.147 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:17.147 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:17.147 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:17.406 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:17.406 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.406 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:17.406 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.406 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.406 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:17.406 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.406 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.406 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:17.406 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:17.406 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:17.406 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:17.664 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.664 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.664 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:17.664 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.664 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.664 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:17.664 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.664 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.664 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:17.664 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:17.664 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.664 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.664 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:17.664 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.664 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.664 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:17.664 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.664 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.664 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:17.664 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:17.921 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:17.921 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:17.921 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.921 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.921 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.921 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:17.921 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:17.921 12:40:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:17.921 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.921 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.921 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:17.921 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:18.178 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.178 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.178 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:18.178 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.178 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.178 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:18.178 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.178 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.178 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:18.178 12:40:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:18.178 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:18.178 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.178 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.178 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:18.178 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.178 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.178 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:18.178 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.178 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.178 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:18.435 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:18.435 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.435 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:18.435 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.435 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.435 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:18.435 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.435 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.435 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:18.435 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:18.435 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:18.435 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:18.693 12:40:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.693 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.693 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:18.693 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.693 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.693 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:18.693 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.693 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.693 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:18.693 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:18.693 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:18.693 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.693 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.693 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:18.693 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.693 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.693 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:18.693 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.693 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.693 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:18.693 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:18.952 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.952 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:18.952 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.952 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.952 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:18.952 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.952 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.952 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:18.952 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:18.952 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:18.952 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:18.952 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.952 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.952 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:19.211 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:19.211 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.211 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.211 12:40:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:19.211 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.211 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.211 12:40:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:19.211 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.211 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.211 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:19.211 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:19.211 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.211 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.211 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:19.211 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.211 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.211 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:19.211 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.211 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.211 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:19.211 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:19.470 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.470 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:19.470 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:19.470 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.470 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.470 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:19.470 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:19.470 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:19.470 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.470 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.470 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:19.470 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:19.728 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.728 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.728 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:19.728 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.728 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.728 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:19.728 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.728 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.728 12:40:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:19.728 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:19.728 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.728 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.728 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:19.728 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.728 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.728 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:19.728 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:19.728 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.728 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.728 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:19.985 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:19.985 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:19.985 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.985 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:19.985 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.985 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.985 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:19.985 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:19.985 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.985 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.985 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:19.985 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:20.242 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.242 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.242 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:20.242 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.242 12:40:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.242 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.242 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.242 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:20.242 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.242 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.242 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:20.242 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:20.242 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:20.242 12:40:12 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.242 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.242 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:20.242 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:20.499 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.499 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.499 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:20.499 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.499 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.499 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.499 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:20.499 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.499 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.499 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:20.499 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:20.499 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.499 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.756 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:20.756 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.756 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.756 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.756 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.756 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:20.756 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.756 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.756 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.756 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:21.013 rmmod nvme_tcp 00:10:21.013 rmmod nvme_fabrics 00:10:21.013 rmmod nvme_keyring 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3810036 ']' 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3810036 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 3810036 ']' 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 3810036 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3810036 00:10:21.013 12:40:12 
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3810036' 00:10:21.013 killing process with pid 3810036 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 3810036 00:10:21.013 12:40:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 3810036 00:10:21.579 12:40:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:21.579 12:40:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:21.579 12:40:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:21.579 12:40:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:21.579 12:40:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:21.579 12:40:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.579 12:40:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:21.580 12:40:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.483 12:40:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:23.483 00:10:23.483 real 0m50.215s 00:10:23.483 user 3m31.420s 00:10:23.483 sys 0m16.025s 00:10:23.483 12:40:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:23.483 12:40:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.483 ************************************ 00:10:23.483 END TEST nvmf_ns_hotplug_stress 00:10:23.483 
************************************ 00:10:23.483 12:40:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:23.483 12:40:15 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:23.483 12:40:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:23.483 12:40:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.483 12:40:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:23.483 ************************************ 00:10:23.483 START TEST nvmf_connect_stress 00:10:23.483 ************************************ 00:10:23.483 12:40:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:23.742 * Looking for test storage... 00:10:23.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.742 12:40:15 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # 
remove_spdk_ns 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:23.742 12:40:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=()
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:10:30.316 Found 0000:af:00.0 (0x8086 - 0x159b)
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:10:30.316 Found 0000:af:00.1 (0x8086 - 0x159b)
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:10:30.316 Found net devices under 0000:af:00.0: cvl_0_0
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:10:30.316 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:10:30.317 Found net devices under 0000:af:00.1: cvl_0_1
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:30.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:30.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms
00:10:30.317
00:10:30.317 --- 10.0.0.2 ping statistics ---
00:10:30.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:30.317 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:30.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:30.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms
00:10:30.317
00:10:30.317 --- 10.0.0.1 ping statistics ---
00:10:30.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:30.317 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3821643
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3821643
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3821643 ']'
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:30.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable
00:10:30.317 12:40:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:30.317 [2024-07-15 12:40:21.426808] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization...
00:10:30.317 [2024-07-15 12:40:21.426879] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:30.317 EAL: No free 2048 kB hugepages reported on node 1
00:10:30.317 [2024-07-15 12:40:21.522967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:30.317 [2024-07-15 12:40:21.628731] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:30.317 [2024-07-15 12:40:21.628776] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:30.317 [2024-07-15 12:40:21.628789] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:30.317 [2024-07-15 12:40:21.628800] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:30.317 [2024-07-15 12:40:21.628809] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:30.317 [2024-07-15 12:40:21.628930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:10:30.317 [2024-07-15 12:40:21.629040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:10:30.317 [2024-07-15 12:40:21.629042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:30.576 [2024-07-15 12:40:22.407566] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:30.576 [2024-07-15 12:40:22.440466] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:30.576 NULL1
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3821924
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:10:30.576 EAL: No free 2048 kB hugepages reported on node 1
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:10:30.576 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:10:30.834 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:10:30.834 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:10:30.834 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:10:30.834 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:10:30.834 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:10:30.834 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:10:30.834 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:10:30.834 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:10:30.835 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:10:30.835 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:10:30.835 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:10:30.835 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:10:30.835 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:10:30.835 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:30.835 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:30.835 12:40:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:30.835 12:40:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:31.093 12:40:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:31.093 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:31.093 12:40:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:31.093 12:40:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:31.093 12:40:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:31.352 12:40:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:31.352 12:40:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:31.352 12:40:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:31.352 12:40:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:31.352 12:40:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:31.610 12:40:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:31.610 12:40:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:31.610 12:40:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:31.610 12:40:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:31.610 12:40:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:32.177 12:40:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:32.177 12:40:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:32.177 12:40:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:32.177 12:40:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:32.177 12:40:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:32.435 12:40:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:32.435 12:40:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:32.435 12:40:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:32.435 12:40:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:32.435 12:40:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:32.693 12:40:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:32.693 12:40:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:32.693 12:40:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:32.693 12:40:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:32.693 12:40:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:32.952 12:40:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:32.952 12:40:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:32.952 12:40:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:32.952 12:40:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:32.952 12:40:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:33.210 12:40:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:33.210 12:40:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:33.210 12:40:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:33.210 12:40:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:33.210 12:40:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:33.778 12:40:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:33.778 12:40:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:33.778 12:40:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:33.778 12:40:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:33.778 12:40:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:34.037 12:40:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:34.037 12:40:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:34.037 12:40:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:34.037 12:40:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:34.037 12:40:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:34.296 12:40:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:34.296 12:40:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:34.296 12:40:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:34.296 12:40:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:34.296 12:40:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:34.555 12:40:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:34.555 12:40:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:34.555 12:40:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:34.555 12:40:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:34.555 12:40:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:35.122 12:40:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:35.122 12:40:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:35.122 12:40:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:35.122 12:40:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:35.122 12:40:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:35.380 12:40:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:35.380 12:40:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:35.380 12:40:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:35.380 12:40:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:35.380 12:40:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:35.637 12:40:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:35.637 12:40:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:35.637 12:40:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:35.637 12:40:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:35.637 12:40:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:35.895 12:40:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:35.895 12:40:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:35.895 12:40:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:35.895 12:40:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:35.895 12:40:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:36.153 12:40:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:36.153 12:40:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:36.153 12:40:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:36.153 12:40:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:36.153 12:40:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:36.720 12:40:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:36.720 12:40:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:36.720 12:40:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:36.720 12:40:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:36.720 12:40:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:36.978 12:40:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:36.978 12:40:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:36.979 12:40:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:36.979 12:40:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:36.979 12:40:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:37.238 12:40:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:37.238 12:40:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:37.238 12:40:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:37.238 12:40:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:37.238 12:40:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:37.497 12:40:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:37.497 12:40:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:37.497 12:40:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:37.497 12:40:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:37.497 12:40:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:38.062 12:40:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:38.062 12:40:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:38.062 12:40:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:38.062 12:40:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:38.062 12:40:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:38.320 12:40:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:38.320 12:40:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:38.320 12:40:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:38.320 12:40:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:38.320 12:40:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:38.579 12:40:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:38.579 12:40:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:38.579 12:40:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:38.579 12:40:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:38.579 12:40:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:38.837 12:40:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:38.837 12:40:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:38.837 12:40:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:38.837 12:40:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:38.837 12:40:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:39.095 12:40:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:39.095 12:40:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:39.095 12:40:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:39.095 12:40:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:39.095 12:40:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:39.663 12:40:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:39.663 12:40:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:39.663 12:40:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:39.663 12:40:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:39.663 12:40:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:39.921 12:40:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:39.921 12:40:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:39.921 12:40:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:39.921 12:40:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:39.921 12:40:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:40.216 12:40:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:40.216 12:40:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:40.216 12:40:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:40.216 12:40:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:40.216 12:40:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:40.498 12:40:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:40.498 12:40:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:40.498 12:40:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:40.498 12:40:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:40.498 12:40:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:40.758 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:40.759 12:40:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:40.759 12:40:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3821924
00:10:40.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3821924) - No such process
00:10:40.759 12:40:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3821924
00:10:40.759 12:40:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:10:40.759 12:40:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:10:40.759 12:40:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:10:40.759 12:40:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:40.759 12:40:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync
00:10:40.759 12:40:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:40.759 12:40:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e
00:10:40.759 12:40:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:40.759 12:40:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:40.759 rmmod nvme_tcp
00:10:40.759 rmmod nvme_fabrics
00:10:40.759 rmmod nvme_keyring
00:10:41.017 12:40:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:41.017 12:40:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e
00:10:41.017 12:40:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0
00:10:41.017 12:40:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3821643 ']'
00:10:41.017 12:40:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3821643
00:10:41.017 12:40:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 3821643 ']'
00:10:41.017 12:40:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3821643
00:10:41.017 12:40:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname
00:10:41.017 12:40:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:10:41.017 12:40:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3821643
00:10:41.017 12:40:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:10:41.017 12:40:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:10:41.017 12:40:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3821643'
00:10:41.017 killing process with pid 3821643
00:10:41.017 12:40:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3821643
00:10:41.017 12:40:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3821643
00:10:41.276 12:40:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:41.276 12:40:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:41.276 12:40:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:41.276 12:40:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:41.276 12:40:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:41.276 12:40:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:41.276 12:40:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:41.276 12:40:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:43.180 12:40:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:43.180
00:10:43.180 real 0m19.651s
00:10:43.180 user 0m41.766s
00:10:43.180 sys 0m8.196s
00:10:43.180 12:40:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:43.180 12:40:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:43.180
************************************ 00:10:43.180 END TEST nvmf_connect_stress 00:10:43.180 ************************************ 00:10:43.180 12:40:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:43.180 12:40:35 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:43.180 12:40:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:43.180 12:40:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.180 12:40:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:43.438 ************************************ 00:10:43.438 START TEST nvmf_fused_ordering 00:10:43.438 ************************************ 00:10:43.438 12:40:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:43.438 * Looking for test storage... 
00:10:43.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.439 12:40:35 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:43.439 12:40:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:50.081 12:40:40 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.081 
12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:50.081 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:50.081 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:50.081 
12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:50.081 Found net devices under 0000:af:00.0: cvl_0_0 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.081 12:40:40 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:50.081 Found net devices under 0000:af:00.1: cvl_0_1 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:50.081 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.082 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.082 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.082 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.082 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:50.082 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.082 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.082 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.082 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:50.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:50.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:10:50.082 00:10:50.082 --- 10.0.0.2 ping statistics --- 00:10:50.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.082 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:10:50.082 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:50.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:10:50.082 00:10:50.082 --- 10.0.0.1 ping statistics --- 00:10:50.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.082 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:10:50.082 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.082 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:50.082 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:50.082 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.082 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:50.082 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:50.082 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.082 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:50.082 12:40:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:50.082 12:40:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:50.082 12:40:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:50.082 12:40:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:50.082 12:40:41 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:50.082 12:40:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3827359 00:10:50.082 12:40:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3827359 00:10:50.082 12:40:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:50.082 12:40:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 3827359 ']' 00:10:50.082 12:40:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.082 12:40:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:50.082 12:40:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.082 12:40:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:50.082 12:40:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:50.082 [2024-07-15 12:40:41.088524] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:10:50.082 [2024-07-15 12:40:41.088588] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.082 EAL: No free 2048 kB hugepages reported on node 1 00:10:50.082 [2024-07-15 12:40:41.177771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.082 [2024-07-15 12:40:41.279696] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:50.082 [2024-07-15 12:40:41.279742] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.082 [2024-07-15 12:40:41.279756] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.082 [2024-07-15 12:40:41.279768] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.082 [2024-07-15 12:40:41.279778] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.082 [2024-07-15 12:40:41.279806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:50.341 [2024-07-15 12:40:42.084962] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:50.341 [2024-07-15 12:40:42.105159] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:50.341 NULL1 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.341 12:40:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:50.341 [2024-07-15 12:40:42.166426] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:10:50.341 [2024-07-15 12:40:42.166497] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3827525 ] 00:10:50.341 EAL: No free 2048 kB hugepages reported on node 1 00:10:50.907 Attached to nqn.2016-06.io.spdk:cnode1 00:10:50.907 Namespace ID: 1 size: 1GB
[fused_ordering(0) through fused_ordering(1023): 1024 sequential entries logged between 00:10:50.907 and 00:10:53.036; repetitive per-entry output elided]
00:10:53.036 12:40:44 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:53.036 12:40:44 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:53.036 12:40:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:53.036 12:40:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:53.036 12:40:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:53.036 12:40:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:53.036 12:40:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:53.036 12:40:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:53.036 rmmod nvme_tcp 00:10:53.036 rmmod nvme_fabrics 00:10:53.036 rmmod nvme_keyring 00:10:53.036 12:40:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:53.036 12:40:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e
00:10:53.036 12:40:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:53.036 12:40:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3827359 ']' 00:10:53.036 12:40:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3827359 00:10:53.036 12:40:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 3827359 ']' 00:10:53.036 12:40:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 3827359 00:10:53.036 12:40:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:10:53.036 12:40:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:53.036 12:40:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3827359 00:10:53.294 12:40:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:53.294 12:40:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:53.294 12:40:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3827359' 00:10:53.294 killing process with pid 3827359 00:10:53.294 12:40:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 3827359 00:10:53.294 12:40:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 3827359 00:10:53.294 12:40:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:53.294 12:40:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:53.294 12:40:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:53.294 12:40:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:53.295 12:40:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:53.295 12:40:45 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.295 12:40:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:53.295 12:40:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.828 12:40:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:55.828 00:10:55.828 real 0m12.139s 00:10:55.828 user 0m7.295s 00:10:55.828 sys 0m6.153s 00:10:55.828 12:40:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.828 12:40:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:55.828 ************************************ 00:10:55.828 END TEST nvmf_fused_ordering 00:10:55.828 ************************************ 00:10:55.828 12:40:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:55.828 12:40:47 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:55.828 12:40:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:55.828 12:40:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.828 12:40:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:55.828 ************************************ 00:10:55.828 START TEST nvmf_delete_subsystem 00:10:55.828 ************************************ 00:10:55.828 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:55.828 * Looking for test storage... 
00:10:55.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.828 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:55.829 12:40:47 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:55.829 12:40:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.102 12:40:53 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:01.102 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:01.102 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:01.103 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.103 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.103 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:01.103 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:01.103 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:01.103 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:11:01.103 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:01.103 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:01.103 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.103 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.103 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:01.103 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:01.372 Found net devices under 0000:af:00.0: cvl_0_0 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:01.372 Found net devices under 0000:af:00.1: cvl_0_1 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.372 
12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:01.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:01.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:11:01.372 00:11:01.372 --- 10.0.0.2 ping statistics --- 00:11:01.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.372 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:01.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:01.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:11:01.372 00:11:01.372 --- 10.0.0.1 ping statistics --- 00:11:01.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.372 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:01.372 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:01.633 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:01.633 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:01.633 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:01.633 
12:40:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:01.633 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3831758 00:11:01.633 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3831758 00:11:01.633 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:01.633 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 3831758 ']' 00:11:01.633 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.633 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:01.633 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.633 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:01.633 12:40:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:01.633 [2024-07-15 12:40:53.402860] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:11:01.633 [2024-07-15 12:40:53.402921] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.633 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.633 [2024-07-15 12:40:53.491534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:01.892 [2024-07-15 12:40:53.581678] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.892 [2024-07-15 12:40:53.581720] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.892 [2024-07-15 12:40:53.581731] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.892 [2024-07-15 12:40:53.581740] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.892 [2024-07-15 12:40:53.581748] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:01.892 [2024-07-15 12:40:53.581799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.892 [2024-07-15 12:40:53.581809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.459 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:02.459 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:02.459 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:02.459 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:02.459 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.459 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.459 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:02.459 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.459 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.459 [2024-07-15 12:40:54.383961] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:02.459 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.459 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:02.459 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.459 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.718 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.718 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.718 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.718 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.718 [2024-07-15 12:40:54.404447] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.718 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.718 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:02.718 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.718 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.718 NULL1 00:11:02.718 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.718 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:02.718 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.718 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.718 Delay0 00:11:02.718 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.718 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.718 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.718 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.718 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:11:02.718 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3831971
00:11:02.718 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:11:02.718 12:40:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:11:02.718 EAL: No free 2048 kB hugepages reported on node 1
00:11:02.718 [2024-07-15 12:40:54.505683] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:11:04.620 12:40:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:04.620 12:40:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:04.620 12:40:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:04.879 Read completed with error (sct=0, sc=8)
00:11:04.879 Write completed with error (sct=0, sc=8)
00:11:04.879 starting I/O failed: -6
00:11:04.879 [repeated 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' records trimmed]
00:11:04.879 [2024-07-15 12:40:56.645625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4be90 is same with the state(5) to be set
00:11:04.880 [2024-07-15 12:40:56.646670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa0e000d600 is same with the state(5) to be set
00:11:05.817 [2024-07-15 12:40:57.603174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2a500 is same with the state(5) to be set
00:11:05.817 [2024-07-15 12:40:57.647554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4e650 is same with the state(5) to be set
00:11:05.817 [2024-07-15 12:40:57.647845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa0e000d2f0 is same with the state(5) to be set
00:11:05.817 [2024-07-15 12:40:57.648351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ad00 is same with the state(5) to be set
00:11:05.818 12:40:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:05.818 [2024-07-15 12:40:57.649011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4bcb0 is same with the state(5) to be set
00:11:05.818 12:40:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:11:05.818 12:40:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3831971
00:11:05.818 Initializing NVMe Controllers
00:11:05.818 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:05.818 Controller IO queue size 128, less than required.
00:11:05.818 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:05.818 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:05.818 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:05.818 Initialization complete. Launching workers.
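The delay=0 / kill -0 records above are the start of delete_subsystem.sh's wait loop: the test polls the backgrounded spdk_nvme_perf pid until the process exits, sleeping 0.5 s between checks, and gives up after a bound like the trace's (( delay++ > 30 )). A minimal bash sketch of that pattern (the function name wait_for_perf_exit and the standalone packaging are illustrative, not from the SPDK source):

```shell
# Bash sketch of the wait loop delete_subsystem.sh traces above: poll a
# background pid with `kill -0` (signal 0 performs only an existence
# check; nothing is delivered) every 0.5 s until the process exits or
# the iteration bound is exceeded. Function name is illustrative.
wait_for_perf_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        (( delay++ > 30 )) && return 1   # give up after ~15 s, like the trace's bound
        sleep 0.5
    done
    return 0
}

# Example: a pid that has already exited (and been reaped) fails the
# first `kill -0` check, so the loop body never runs.
true & pid=$!
wait "$pid"
wait_for_perf_exit "$pid" && echo "perf process has exited"
```

The `kill -0` probe is the same idiom the script later reports as "kill: (3831971) - No such process" once the perf process is gone.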
00:11:05.818 ========================================================
00:11:05.818 Latency(us)
00:11:05.818 Device Information : IOPS MiB/s Average min max
00:11:05.818 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 194.80 0.10 946341.67 1404.25 1019348.20
00:11:05.818 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.22 0.08 871164.86 734.35 1020369.21
00:11:05.818 ========================================================
00:11:05.818 Total : 352.02 0.17 912765.51 734.35 1020369.21
00:11:05.818 12:40:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:11:05.818 [2024-07-15 12:40:57.649928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2a500 (9): Bad file descriptor
00:11:05.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3831971
00:11:06.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3831971) - No such process
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3831971
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3831971
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3831971
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:06.386 [2024-07-15 12:40:58.177943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3832579
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3832579
00:11:06.386 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:06.386 EAL: No free 2048 kB hugepages reported on node 1
00:11:06.386 [2024-07-15 12:40:58.260333] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
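The "NOT wait 3831971" sequence traced above comes from common/autotest_common.sh: NOT runs a command and succeeds only if the command fails, which is how the test asserts the first perf pid is really gone. A condensed bash sketch of that inversion (this omits the valid_exec_arg check and the exit-code > 128 signal handling visible in the trace):

```shell
# Condensed sketch of the NOT helper pattern from the trace: execute a
# command and invert its status, so `NOT some_cmd` succeeds only when
# some_cmd fails. The real autotest_common.sh version also validates the
# argument (valid_exec_arg) and special-cases exit codes > 128 (signals).
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # success for NOT means the wrapped command failed
}

# Example: asserting that waiting on a long-dead pid fails, as the test
# does with "NOT wait 3831971" after the subsystem was deleted.
NOT wait 99999999 2>/dev/null && echo "pid is gone, as expected"
```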
00:11:06.953 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:06.953 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3832579
00:11:06.953 12:40:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:07.519 12:40:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:07.519 12:40:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3832579
00:11:07.519 12:40:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:07.778 12:40:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:07.778 12:40:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3832579
00:11:07.778 12:40:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:08.346 12:41:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:08.346 12:41:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3832579
00:11:08.346 12:41:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:08.913 12:41:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:08.913 12:41:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3832579
00:11:08.913 12:41:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:09.481 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:09.481 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3832579
00:11:09.481 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:09.740 Initializing NVMe Controllers
00:11:09.740 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:09.740 Controller IO queue size 128, less than required.
00:11:09.740 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:09.740 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:09.740 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:09.740 Initialization complete. Launching workers.
00:11:09.740 ========================================================
00:11:09.740 Latency(us)
00:11:09.740 Device Information : IOPS MiB/s Average min max
00:11:09.740 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005667.42 1000234.42 1041499.12
00:11:09.740 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006310.37 1000258.80 1040586.97
00:11:09.740 ========================================================
00:11:09.740 Total : 256.00 0.12 1005988.89 1000234.42 1041499.12
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3832579
00:11:09.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3832579) - No such process
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3832579
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:09.999 rmmod nvme_tcp
00:11:09.999 rmmod nvme_fabrics
00:11:09.999 rmmod nvme_keyring
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3831758 ']'
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3831758
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 3831758 ']'
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 3831758
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3831758
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3831758'
00:11:09.999 killing process with pid 3831758
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 3831758
00:11:09.999 12:41:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 3831758
00:11:10.258 12:41:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:11:10.258 12:41:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:11:10.258 12:41:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:11:10.258 12:41:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:10.258 12:41:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:11:10.258 12:41:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:10.258 12:41:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:10.258 12:41:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:12.790 12:41:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:11:12.790
00:11:12.790 real 0m16.763s
00:11:12.790 user 0m30.826s
00:11:12.790 sys 0m5.458s
00:11:12.790 12:41:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:12.790 12:41:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:12.790 ************************************
00:11:12.790 END TEST nvmf_delete_subsystem
00:11:12.790 ************************************
00:11:12.790 12:41:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:11:12.790 12:41:04 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:11:12.790 12:41:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:11:12.790 12:41:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:12.790 12:41:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:11:12.790 ************************************
00:11:12.790 START TEST nvmf_ns_masking
00:11:12.790 ************************************
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:11:12.790 * Looking for test storage...
00:11:12.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:11:12.790 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- #
'[' 0 -eq 1 ']' 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=88236414-c1cc-49fb-a28d-164e8cd606f1 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=7f6059fd-b6c2-47ff-b6a7-4c8bac93034f 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=1990d24c-67a6-44ab-87bb-7dbddf1f9376 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:12.791 12:41:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:18.062 
12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:18.062 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:18.062 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:18.062 Found net devices under 0000:af:00.0: cvl_0_0 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:18.062 Found net devices under 0000:af:00.1: cvl_0_1 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:18.062 12:41:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:18.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:11:18.321 00:11:18.321 --- 10.0.0.2 ping statistics --- 00:11:18.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.321 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:18.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:11:18.321 00:11:18.321 --- 10.0.0.1 ping statistics --- 00:11:18.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.321 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:18.321 12:41:10 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3836838 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3836838 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3836838 ']' 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:18.321 12:41:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:18.581 [2024-07-15 12:41:10.270309] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:11:18.581 [2024-07-15 12:41:10.270364] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.581 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.581 [2024-07-15 12:41:10.355896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.581 [2024-07-15 12:41:10.445095] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.581 [2024-07-15 12:41:10.445138] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.581 [2024-07-15 12:41:10.445148] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.581 [2024-07-15 12:41:10.445158] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.581 [2024-07-15 12:41:10.445165] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:18.581 [2024-07-15 12:41:10.445186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.516 12:41:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:19.516 12:41:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:19.516 12:41:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:19.516 12:41:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:19.516 12:41:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:19.516 12:41:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.516 12:41:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:19.774 [2024-07-15 12:41:11.487990] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.774 12:41:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:19.774 12:41:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:19.774 12:41:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:20.033 Malloc1 00:11:20.033 12:41:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:20.307 Malloc2 00:11:20.307 12:41:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:20.615 12:41:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:20.915 12:41:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.173 [2024-07-15 12:41:12.989069] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.173 12:41:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:21.173 12:41:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1990d24c-67a6-44ab-87bb-7dbddf1f9376 -a 10.0.0.2 -s 4420 -i 4 00:11:21.434 12:41:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:21.434 12:41:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:21.434 12:41:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:21.434 12:41:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:21.434 12:41:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:23.338 12:41:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:23.338 12:41:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:23.338 12:41:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:23.338 12:41:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:23.338 12:41:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:23.338 12:41:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:23.338 
12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:23.338 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:23.597 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:23.597 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:23.597 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:23.597 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:23.597 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:23.597 [ 0]:0x1 00:11:23.597 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:23.597 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:23.597 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b6f60af3b4f141c5836ee238a42aa46a 00:11:23.597 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b6f60af3b4f141c5836ee238a42aa46a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:23.597 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:23.855 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:23.855 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:23.855 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:23.855 [ 0]:0x1 00:11:23.855 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:23.855 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
00:11:23.855 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b6f60af3b4f141c5836ee238a42aa46a 00:11:23.855 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b6f60af3b4f141c5836ee238a42aa46a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:23.855 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:23.855 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:23.856 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:23.856 [ 1]:0x2 00:11:23.856 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:23.856 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:23.856 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a8e21a7c71b401c9f24c8070631fdd4 00:11:23.856 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9a8e21a7c71b401c9f24c8070631fdd4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:23.856 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:24.114 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:24.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.114 12:41:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.372 12:41:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:24.631 12:41:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:24.631 12:41:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1990d24c-67a6-44ab-87bb-7dbddf1f9376 -a 10.0.0.2 -s 4420 -i 4 00:11:24.631 12:41:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:24.631 12:41:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:24.631 12:41:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:24.631 12:41:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:24.631 12:41:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:24.631 12:41:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 
00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:27.165 [ 0]:0x2 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a8e21a7c71b401c9f24c8070631fdd4 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9a8e21a7c71b401c9f24c8070631fdd4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:27.165 12:41:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:27.165 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:27.165 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:27.165 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:27.165 [ 0]:0x1 00:11:27.165 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:27.165 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:27.165 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b6f60af3b4f141c5836ee238a42aa46a 00:11:27.166 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b6f60af3b4f141c5836ee238a42aa46a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:27.166 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:27.166 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:27.166 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 
0x2 00:11:27.166 [ 1]:0x2 00:11:27.166 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:27.166 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:27.424 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a8e21a7c71b401c9f24c8070631fdd4 00:11:27.424 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9a8e21a7c71b401c9f24c8070631fdd4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:27.424 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # jq -r .nguid 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:27.683 [ 0]:0x2 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a8e21a7c71b401c9f24c8070631fdd4 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9a8e21a7c71b401c9f24c8070631fdd4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:27.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.683 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host 
nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:27.942 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:27.942 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1990d24c-67a6-44ab-87bb-7dbddf1f9376 -a 10.0.0.2 -s 4420 -i 4 00:11:28.200 12:41:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:28.201 12:41:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:28.201 12:41:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:28.201 12:41:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:28.201 12:41:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:28.201 12:41:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:30.099 12:41:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:30.099 12:41:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:30.099 12:41:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:30.099 12:41:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:30.099 12:41:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:30.099 12:41:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:30.099 12:41:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:30.099 12:41:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:30.099 12:41:22 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:30.099 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:30.099 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:30.099 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:30.099 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:30.099 [ 0]:0x1 00:11:30.099 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:30.099 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:30.357 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b6f60af3b4f141c5836ee238a42aa46a 00:11:30.357 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b6f60af3b4f141c5836ee238a42aa46a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:30.357 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:30.357 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:30.357 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:30.357 [ 1]:0x2 00:11:30.357 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:30.357 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:30.357 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a8e21a7c71b401c9f24c8070631fdd4 00:11:30.357 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9a8e21a7c71b401c9f24c8070631fdd4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:30.357 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:30.615 12:41:22 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:30.615 [ 0]:0x2 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a8e21a7c71b401c9f24c8070631fdd4 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9a8e21a7c71b401c9f24c8070631fdd4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:30.615 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:30.873 [2024-07-15 12:41:22.739052] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:30.873 request: 00:11:30.873 { 00:11:30.873 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:30.873 "nsid": 2, 00:11:30.873 "host": "nqn.2016-06.io.spdk:host1", 00:11:30.873 "method": "nvmf_ns_remove_host", 00:11:30.873 "req_id": 1 00:11:30.873 } 00:11:30.873 Got JSON-RPC error response 00:11:30.873 response: 00:11:30.873 { 00:11:30.873 "code": -32602, 00:11:30.873 "message": "Invalid parameters" 00:11:30.873 } 00:11:30.873 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:30.873 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:30.873 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:30.873 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:30.873 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:30.873 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:30.873 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 
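The `ns_is_visible` checks running throughout the trace decide visibility from the NGUID that `nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid` returns: a namespace masked from this host identifies with an all-zero NGUID, while a visible one reports the NGUID set via `nvmf_subsystem_add_ns -g`. A sketch of just that comparison, isolated from the hardware-dependent `nvme`/`jq` pipeline:

```shell
#!/usr/bin/env bash
# Hedged sketch of the visibility test inside ns_is_visible
# (target/ns_masking.sh). The nguid argument is what the real script gets
# from: nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid
nguid_is_masked() {
    local nguid=$1
    # 32 hex zeros means the controller hides this namespace from the host.
    [[ $nguid == "00000000000000000000000000000000" ]]
}
```

This mirrors the `[[ $nguid != \0\0…\0 ]]` tests in the log, where `00000000000000000000000000000000` marks a masked namespace and values like `9a8e21a7c71b401c9f24c8070631fdd4` a visible one.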
00:11:30.873 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:30.873 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.873 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:30.873 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.873 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:30.873 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:30.873 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:30.873 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:30.873 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:31.132 [ 0]:0x2 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a8e21a7c71b401c9f24c8070631fdd4 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9a8e21a7c71b401c9f24c8070631fdd4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:31.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3839349 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3839349 /var/tmp/host.sock 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3839349 ']' 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:31.132 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:31.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
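The `NOT ns_is_visible 0x1` and `NOT …/rpc.py nvmf_ns_remove_host …` calls above are negative tests: the wrapper runs the command, captures its exit status as `es`, and succeeds only when the command failed (the `(( !es == 0 ))` arithmetic from autotest_common.sh). A simplified sketch that keeps that inverted-status core while omitting the `valid_exec_arg`/`type -t` dispatch shown in the trace:

```shell
#!/usr/bin/env bash
# Hedged, simplified sketch of the NOT helper from autotest_common.sh:
# succeed iff the wrapped command fails. The real helper also validates
# the argument with type -t / type -P before executing it.
NOT() {
    local es=0
    "$@" || es=$?
    # (( !es == 0 )) is true (exit 0) exactly when es is nonzero.
    (( !es == 0 ))
}
```

This is why the expected JSON-RPC "Invalid parameters" error above still lets the test proceed: `es=1` makes `NOT` return 0.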
00:11:31.133 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:31.133 12:41:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:31.133 [2024-07-15 12:41:22.965787] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:11:31.133 [2024-07-15 12:41:22.965848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3839349 ] 00:11:31.133 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.133 [2024-07-15 12:41:23.048479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.390 [2024-07-15 12:41:23.150433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.325 12:41:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:32.325 12:41:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:32.325 12:41:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.325 12:41:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:32.583 12:41:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 88236414-c1cc-49fb-a28d-164e8cd606f1 00:11:32.583 12:41:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:32.583 12:41:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 88236414C1CC49FBA28D164E8CD606F1 -i 00:11:32.842 12:41:24 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # uuid2nguid 7f6059fd-b6c2-47ff-b6a7-4c8bac93034f 00:11:32.842 12:41:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:32.842 12:41:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 7F6059FDB6C247FFB6A74C8BAC93034F -i 00:11:33.101 12:41:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:33.360 12:41:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:33.619 12:41:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:33.619 12:41:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:34.182 nvme0n1 00:11:34.182 12:41:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:34.182 12:41:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:34.463 nvme1n2 00:11:34.463 12:41:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # 
hostrpc bdev_get_bdevs 00:11:34.463 12:41:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:34.463 12:41:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:34.463 12:41:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:34.463 12:41:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:34.722 12:41:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:34.722 12:41:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:34.722 12:41:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:34.722 12:41:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:34.980 12:41:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 88236414-c1cc-49fb-a28d-164e8cd606f1 == \8\8\2\3\6\4\1\4\-\c\1\c\c\-\4\9\f\b\-\a\2\8\d\-\1\6\4\e\8\c\d\6\0\6\f\1 ]] 00:11:34.980 12:41:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:34.980 12:41:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:34.980 12:41:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:35.239 12:41:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 7f6059fd-b6c2-47ff-b6a7-4c8bac93034f == \7\f\6\0\5\9\f\d\-\b\6\c\2\-\4\7\f\f\-\b\6\a\7\-\4\c\8\b\a\c\9\3\0\3\4\f ]] 00:11:35.239 12:41:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3839349 00:11:35.239 12:41:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # 
'[' -z 3839349 ']' 00:11:35.239 12:41:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3839349 00:11:35.239 12:41:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:35.239 12:41:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:35.239 12:41:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3839349 00:11:35.239 12:41:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:35.239 12:41:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:35.239 12:41:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3839349' 00:11:35.239 killing process with pid 3839349 00:11:35.239 12:41:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3839349 00:11:35.239 12:41:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3839349 00:11:35.498 12:41:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.757 12:41:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:35.757 12:41:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:35.757 12:41:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:35.757 12:41:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:35.757 12:41:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:35.757 12:41:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:35.757 12:41:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:35.757 12:41:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:35.757 rmmod nvme_tcp 00:11:35.757 rmmod 
nvme_fabrics 00:11:35.757 rmmod nvme_keyring 00:11:35.757 12:41:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:35.757 12:41:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:35.757 12:41:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:35.757 12:41:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3836838 ']' 00:11:35.757 12:41:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3836838 00:11:35.757 12:41:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3836838 ']' 00:11:35.757 12:41:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3836838 00:11:35.757 12:41:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:35.757 12:41:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:36.015 12:41:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3836838 00:11:36.015 12:41:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:36.015 12:41:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:36.015 12:41:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3836838' 00:11:36.015 killing process with pid 3836838 00:11:36.015 12:41:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3836838 00:11:36.015 12:41:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3836838 00:11:36.274 12:41:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:36.274 12:41:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:36.274 12:41:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:36.274 12:41:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
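Earlier in the trace, `uuid2nguid` turns UUIDs such as `88236414-c1cc-49fb-a28d-164e8cd606f1` into the `-g` argument `88236414C1CC49FBA28D164E8CD606F1` for `nvmf_subsystem_add_ns`. The xtrace only shows the `tr -d -` step; the uppercasing below is an assumption made to match the NGUID actually passed:

```shell
#!/usr/bin/env bash
# Hedged sketch of uuid2nguid from nvmf/common.sh: strip the dashes
# (the tr -d - visible in the trace) and uppercase the hex digits
# (assumed here; only the dash removal appears in the log).
uuid2nguid() {
    echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}
```

NGUIDs are 16-byte identifiers, so the result is always 32 hex characters when the input is a well-formed UUID.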
00:11:36.274 12:41:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:36.274 12:41:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.274 12:41:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.274 12:41:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.179 12:41:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:38.179 00:11:38.179 real 0m25.870s 00:11:38.179 user 0m30.510s 00:11:38.179 sys 0m6.931s 00:11:38.179 12:41:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:38.179 12:41:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:38.179 ************************************ 00:11:38.179 END TEST nvmf_ns_masking 00:11:38.179 ************************************ 00:11:38.179 12:41:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:38.179 12:41:30 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:38.179 12:41:30 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:38.179 12:41:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:38.179 12:41:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:38.179 12:41:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:38.438 ************************************ 00:11:38.438 START TEST nvmf_nvme_cli 00:11:38.438 ************************************ 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:38.438 * Looking for test storage... 
00:11:38.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.438 12:41:30 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:38.438 12:41:30 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:38.438 12:41:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:45.007 12:41:35 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:45.007 12:41:35 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:45.007 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:45.007 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:45.008 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:45.008 12:41:35 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:45.008 Found net devices under 0000:af:00.0: cvl_0_0 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:45.008 Found net devices under 0000:af:00.1: cvl_0_1 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:45.008 12:41:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:45.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:45.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:11:45.008 00:11:45.008 --- 10.0.0.2 ping statistics --- 00:11:45.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.008 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:45.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:45.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:11:45.008 00:11:45.008 --- 10.0.0.1 ping statistics --- 00:11:45.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.008 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3843888 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3843888 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 3843888 ']' 
00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:45.008 12:41:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:45.008 [2024-07-15 12:41:36.185131] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:11:45.008 [2024-07-15 12:41:36.185176] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:45.008 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.008 [2024-07-15 12:41:36.259243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:45.008 [2024-07-15 12:41:36.351632] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:45.008 [2024-07-15 12:41:36.351676] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:45.008 [2024-07-15 12:41:36.351686] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:45.008 [2024-07-15 12:41:36.351695] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:45.008 [2024-07-15 12:41:36.351702] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:45.008 [2024-07-15 12:41:36.351756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.008 [2024-07-15 12:41:36.351868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.008 [2024-07-15 12:41:36.351980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:45.008 [2024-07-15 12:41:36.351980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:45.267 [2024-07-15 12:41:37.097263] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:45.267 Malloc0 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.267 
12:41:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:45.267 Malloc1 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 
00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:45.267 [2024-07-15 12:41:37.188131] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.267 12:41:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:45.526 00:11:45.526 Discovery Log Number of Records 2, Generation counter 2 00:11:45.526 =====Discovery Log Entry 0====== 00:11:45.526 trtype: tcp 00:11:45.526 adrfam: ipv4 00:11:45.526 subtype: current discovery subsystem 00:11:45.526 treq: not required 00:11:45.526 portid: 0 00:11:45.526 trsvcid: 4420 00:11:45.526 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:45.526 traddr: 10.0.0.2 00:11:45.526 eflags: explicit discovery connections, duplicate discovery information 00:11:45.526 sectype: none 00:11:45.526 =====Discovery Log Entry 1====== 00:11:45.526 trtype: tcp 00:11:45.526 adrfam: ipv4 00:11:45.526 subtype: nvme subsystem 00:11:45.526 treq: not required 00:11:45.526 portid: 0 00:11:45.526 trsvcid: 4420 00:11:45.526 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:45.526 traddr: 10.0.0.2 00:11:45.526 eflags: none 00:11:45.526 sectype: none 00:11:45.526 12:41:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:45.526 12:41:37 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:45.526 12:41:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:45.526 12:41:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:45.526 12:41:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:45.526 12:41:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:45.526 12:41:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:45.526 12:41:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:45.526 12:41:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:45.527 12:41:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:45.527 12:41:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:46.905 12:41:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:46.905 12:41:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:11:46.905 12:41:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:46.905 12:41:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:46.905 12:41:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:46.905 12:41:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 
00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:48.810 /dev/nvme0n1 ]] 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:11:48.810 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list
00:11:49.069 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]]
00:11:49.069 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:11:49.070 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]]
00:11:49.070 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:11:49.070 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:11:49.070 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2
00:11:49.070 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:11:49.070 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:11:49.070 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1
00:11:49.070 12:41:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:11:49.070 12:41:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2
00:11:49.070 12:41:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:49.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:49.329 12:41:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:49.329 12:41:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0
00:11:49.329 12:41:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:11:49.329 12:41:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:49.329 12:41:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:11:49.329 12:41:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:49.329 12:41:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0
00:11:49.329 12:41:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection ))
00:11:49.329 12:41:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:49.329 12:41:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:49.329 12:41:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:11:49.329 12:41:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:49.329 12:41:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:11:49.329 12:41:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini
00:11:49.329 12:41:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:49.329 12:41:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync
00:11:49.329 12:41:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:49.329 12:41:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e
00:11:49.329 12:41:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:49.329 12:41:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:49.329 rmmod nvme_tcp
00:11:49.329 rmmod nvme_fabrics
00:11:49.329 rmmod nvme_keyring
00:11:49.588 12:41:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:49.588 12:41:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e
00:11:49.588 12:41:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0
00:11:49.588 12:41:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3843888 ']'
00:11:49.588 12:41:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3843888
00:11:49.588 12:41:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 3843888 ']'
00:11:49.588 12:41:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 3843888
00:11:49.588 12:41:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname
00:11:49.588 12:41:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:11:49.588 12:41:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3843888
00:11:49.588 12:41:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:11:49.588 12:41:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:11:49.588 12:41:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3843888'
00:11:49.588 killing process with pid 3843888
00:11:49.588 12:41:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 3843888
00:11:49.588 12:41:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 3843888
00:11:49.846 12:41:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:11:49.846 12:41:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:11:49.846 12:41:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:11:49.846 12:41:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:49.846 12:41:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns
00:11:49.846 12:41:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:49.846 12:41:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:49.846 12:41:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:51.750 12:41:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:11:51.750
00:11:51.750 real 0m13.537s
00:11:51.750 user 0m22.381s
00:11:51.750 sys 0m5.121s
00:11:51.750 12:41:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:51.750 12:41:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:11:51.750 ************************************
00:11:51.750 END TEST nvmf_nvme_cli
00:11:51.750 ************************************
00:11:52.009 12:41:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:11:52.009 12:41:43 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]]
00:11:52.009 12:41:43 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:11:52.009 12:41:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:11:52.009 12:41:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:52.009 12:41:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:11:52.009 ************************************
00:11:52.009 START TEST nvmf_vfio_user
00:11:52.009 ************************************
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:11:52.009 * Looking for test storage...
00:11:52.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64
00:11:52.009 12:41:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:11:52.010 12:41:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2
00:11:52.010 12:41:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:52.010 12:41:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:11:52.010 12:41:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:11:52.010 12:41:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user
00:11:52.010 12:41:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' ''
00:11:52.010 12:41:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=
00:11:52.010 12:41:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args=
00:11:52.010 12:41:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3845346
00:11:52.010 12:41:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3845346'
00:11:52.010 Process pid: 3845346
00:11:52.010 12:41:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:11:52.010 12:41:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3845346
00:11:52.010 12:41:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'
00:11:52.010 12:41:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3845346 ']'
00:11:52.010 12:41:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:52.010 12:41:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100
00:11:52.010 12:41:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:52.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:52.010 12:41:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable
00:11:52.010 12:41:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:11:52.010 [2024-07-15 12:41:43.901293] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization...
00:11:52.010 [2024-07-15 12:41:43.901354] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:52.010 EAL: No free 2048 kB hugepages reported on node 1
00:11:52.268 [2024-07-15 12:41:43.981721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:52.268 [2024-07-15 12:41:44.074589] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:52.268 [2024-07-15 12:41:44.074624] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:52.268 [2024-07-15 12:41:44.074635] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:52.268 [2024-07-15 12:41:44.074644] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:52.268 [2024-07-15 12:41:44.074651] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:52.268 [2024-07-15 12:41:44.074700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:11:52.268 [2024-07-15 12:41:44.074814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:11:52.268 [2024-07-15 12:41:44.074924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:11:52.268 [2024-07-15 12:41:44.074925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:52.268 12:41:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:52.268 12:41:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0
00:11:52.268 12:41:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1
00:11:53.644 12:41:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER
00:11:53.644 12:41:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:11:53.644 12:41:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:11:53.644 12:41:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:11:53.644 12:41:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:11:53.644 12:41:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:11:53.902 Malloc1
00:11:53.902 12:41:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:11:54.160 12:41:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:11:54.418 12:41:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:11:54.688 12:41:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:11:54.688 12:41:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:11:54.688 12:41:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:11:55.027 Malloc2
00:11:55.027 12:41:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
00:11:55.285 12:41:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
00:11:55.542 12:41:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
00:11:55.803 12:41:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user
00:11:55.803 12:41:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2
00:11:55.803 12:41:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:11:55.803 12:41:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1
00:11:55.803 12:41:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1
00:11:55.803 12:41:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci
00:11:55.803 [2024-07-15 12:41:47.512370] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization...
00:11:55.803 [2024-07-15 12:41:47.512407] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846144 ]
00:11:55.803 EAL: No free 2048 kB hugepages reported on node 1
00:11:55.803 [2024-07-15 12:41:47.551822] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1
00:11:55.803 [2024-07-15 12:41:47.554316] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:11:55.803 [2024-07-15 12:41:47.554340] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa610a89000
00:11:55.803 [2024-07-15 12:41:47.555322] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:11:55.803 [2024-07-15 12:41:47.556328] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:11:55.803 [2024-07-15 12:41:47.557327] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:11:55.803 [2024-07-15 12:41:47.558332] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:11:55.803 [2024-07-15 12:41:47.559338] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:11:55.803 [2024-07-15 12:41:47.560355] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:11:55.803 [2024-07-15 12:41:47.561363] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:11:55.803 [2024-07-15 12:41:47.562369] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:11:55.803 [2024-07-15 12:41:47.563371] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
00:11:55.803 [2024-07-15 12:41:47.563383] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa610a7e000
00:11:55.803 [2024-07-15 12:41:47.564789] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:11:55.803 [2024-07-15 12:41:47.585274] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully
00:11:55.803 [2024-07-15 12:41:47.585304] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout)
00:11:55.803 [2024-07-15 12:41:47.587568] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff
00:11:55.803 [2024-07-15 12:41:47.587619] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192
00:11:55.803 [2024-07-15 12:41:47.587712] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout)
00:11:55.803 [2024-07-15 12:41:47.587734] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout)
00:11:55.803 [2024-07-15 12:41:47.587741] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout)
00:11:55.803 [2024-07-15 12:41:47.588569] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300
00:11:55.803 [2024-07-15 12:41:47.588580] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout)
00:11:55.803 [2024-07-15 12:41:47.588589] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout)
00:11:55.803 [2024-07-15 12:41:47.589577] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff
00:11:55.803 [2024-07-15 12:41:47.589589] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout)
00:11:55.803 [2024-07-15 12:41:47.589598] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms)
00:11:55.803 [2024-07-15 12:41:47.590590] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0
00:11:55.803 [2024-07-15 12:41:47.590601] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:11:55.803 [2024-07-15 12:41:47.591594] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0
00:11:55.803 [2024-07-15 12:41:47.591605] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0
00:11:55.803 [2024-07-15 12:41:47.591611] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms)
00:11:55.803 [2024-07-15 12:41:47.591620] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:11:55.803 [2024-07-15 12:41:47.591727] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1
00:11:55.803 [2024-07-15 12:41:47.591733] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:11:55.803 [2024-07-15 12:41:47.591740] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000
00:11:55.803 [2024-07-15 12:41:47.592607] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000
00:11:55.803 [2024-07-15 12:41:47.593611] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff
00:11:55.803 [2024-07-15 12:41:47.594628] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:11:55.803 [2024-07-15 12:41:47.595621] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:11:55.803 [2024-07-15 12:41:47.595715] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:11:55.803 [2024-07-15 12:41:47.596642] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1
00:11:55.803 [2024-07-15 12:41:47.596653] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:11:55.803 [2024-07-15 12:41:47.596659] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms)
00:11:55.803 [2024-07-15 12:41:47.596683] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout)
00:11:55.803 [2024-07-15 12:41:47.596698] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms)
00:11:55.803 [2024-07-15 12:41:47.596715] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:11:55.803 [2024-07-15 12:41:47.596722] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:11:55.803 [2024-07-15 12:41:47.596738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:11:55.803 [2024-07-15 12:41:47.596802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0
00:11:55.803 [2024-07-15 12:41:47.596814] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072
00:11:55.803 [2024-07-15 12:41:47.596823] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072
00:11:55.803 [2024-07-15 12:41:47.596829] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001
00:11:55.803 [2024-07-15 12:41:47.596835] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000
00:11:55.803 [2024-07-15 12:41:47.596841] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1
00:11:55.803 [2024-07-15 12:41:47.596847] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1
00:11:55.803 [2024-07-15 12:41:47.596853] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms)
00:11:55.803 [2024-07-15 12:41:47.596863] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms)
00:11:55.803 [2024-07-15 12:41:47.596875] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0
00:11:55.803 [2024-07-15 12:41:47.596897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
00:11:55.803 [2024-07-15 12:41:47.596916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:55.803 [2024-07-15 12:41:47.596927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:55.803 [2024-07-15 12:41:47.596937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:55.803 [2024-07-15 12:41:47.596948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:55.803 [2024-07-15 12:41:47.596954] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms)
00:11:55.803 [2024-07-15 12:41:47.596965] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:11:55.803 [2024-07-15 12:41:47.596978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0
00:11:55.803 [2024-07-15 12:41:47.596996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
00:11:55.803 [2024-07-15 12:41:47.597003] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms
00:11:55.803 [2024-07-15 12:41:47.597009] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms)
00:11:55.803 [2024-07-15 12:41:47.597017] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms)
00:11:55.803 [2024-07-15 12:41:47.597024] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms)
00:11:55.803 [2024-07-15 12:41:47.597035] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:11:55.803 [2024-07-15 12:41:47.597057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0
00:11:55.803 [2024-07-15 12:41:47.597128] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms)
00:11:55.803 [2024-07-15 12:41:47.597138] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms)
00:11:55.803 [2024-07-15 12:41:47.597148] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096
00:11:55.803 [2024-07-15 12:41:47.597153] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000
00:11:55.803 [2024-07-15 12:41:47.597161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0
00:11:55.803 [2024-07-15 12:41:47.597186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0
00:11:55.803 [2024-07-15 12:41:47.597198] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added
00:11:55.803 [2024-07-15 12:41:47.597209] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms)
00:11:55.803 [2024-07-15 12:41:47.597218] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms)
00:11:55.804 [2024-07-15 12:41:47.597227] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:11:55.804 [2024-07-15 12:41:47.597233] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:11:55.804 [2024-07-15 12:41:47.597240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:11:55.803 [2024-07-15 12:41:47.597278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0
00:11:55.804 [2024-07-15 12:41:47.597295] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:11:55.804 [2024-07-15 12:41:47.597304] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:11:55.804 [2024-07-15 12:41:47.597313] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:11:55.804 [2024-07-15 12:41:47.597319] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:11:55.804 [2024-07-15 12:41:47.597327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:11:55.804 [2024-07-15 12:41:47.597344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0
00:11:55.804 [2024-07-15 12:41:47.597355] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms)
00:11:55.804 [2024-07-15 12:41:47.597363] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms)
00:11:55.804 [2024-07-15 12:41:47.597373] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms)
00:11:55.804 [2024-07-15 12:41:47.597380] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms)
00:11:55.804 [2024-07-15 12:41:47.597386] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms)
00:11:55.804 [2024-07-15 12:41:47.597393] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms)
00:11:55.804 [2024-07-15 12:41:47.597399] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID
00:11:55.804 [2024-07-15 12:41:47.597405] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms)
00:11:55.804 [2024-07-15 12:41:47.597411] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout)
00:11:55.804 [2024-07-15 12:41:47.597431] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0
00:11:55.804 [2024-07-15 12:41:47.597450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0
00:11:55.804 [2024-07-15 12:41:47.597464] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0
00:11:55.804 [2024-07-15 12:41:47.597484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0
00:11:55.804 [2024-07-15 12:41:47.597497] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0
00:11:55.804 [2024-07-15 12:41:47.597518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0
00:11:55.804 [2024-07-15 12:41:47.597531] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:11:55.804 [2024-07-15 12:41:47.597549] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:55.804 [2024-07-15 12:41:47.597565] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:55.804 [2024-07-15 12:41:47.597571] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:55.804 [2024-07-15 12:41:47.597576] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:55.804 [2024-07-15 12:41:47.597580] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:55.804 [2024-07-15 12:41:47.597588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:55.804 [2024-07-15 12:41:47.597597] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:55.804 [2024-07-15 12:41:47.597603] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:55.804 [2024-07-15 12:41:47.597610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:55.804 [2024-07-15 12:41:47.597621] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:55.804 [2024-07-15 12:41:47.597627] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:55.804 [2024-07-15 12:41:47.597635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:55.804 [2024-07-15 12:41:47.597644] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:55.804 [2024-07-15 12:41:47.597650] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:55.804 [2024-07-15 12:41:47.597657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:55.804 [2024-07-15 12:41:47.597666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:55.804 [2024-07-15 12:41:47.597681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:55.804 [2024-07-15 12:41:47.597694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:55.804 [2024-07-15 12:41:47.597704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:55.804 ===================================================== 00:11:55.804 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:55.804 ===================================================== 00:11:55.804 Controller Capabilities/Features 00:11:55.804 ================================ 00:11:55.804 Vendor ID: 4e58 00:11:55.804 Subsystem Vendor ID: 4e58 00:11:55.804 Serial Number: SPDK1 00:11:55.804 Model Number: SPDK bdev Controller 00:11:55.804 Firmware Version: 24.09 00:11:55.804 Recommended Arb Burst: 6 00:11:55.804 IEEE OUI Identifier: 8d 6b 50 00:11:55.804 Multi-path I/O 00:11:55.804 May have multiple subsystem ports: Yes 00:11:55.804 May have multiple controllers: Yes 00:11:55.804 Associated with SR-IOV VF: No 00:11:55.804 Max Data Transfer Size: 131072 00:11:55.804 Max Number of Namespaces: 32 00:11:55.804 Max Number of I/O Queues: 127 00:11:55.804 NVMe Specification Version (VS): 1.3 00:11:55.804 NVMe Specification Version (Identify): 1.3 00:11:55.804 Maximum Queue Entries: 256 00:11:55.804 
Contiguous Queues Required: Yes 00:11:55.804 Arbitration Mechanisms Supported 00:11:55.804 Weighted Round Robin: Not Supported 00:11:55.804 Vendor Specific: Not Supported 00:11:55.804 Reset Timeout: 15000 ms 00:11:55.804 Doorbell Stride: 4 bytes 00:11:55.804 NVM Subsystem Reset: Not Supported 00:11:55.804 Command Sets Supported 00:11:55.804 NVM Command Set: Supported 00:11:55.804 Boot Partition: Not Supported 00:11:55.804 Memory Page Size Minimum: 4096 bytes 00:11:55.804 Memory Page Size Maximum: 4096 bytes 00:11:55.804 Persistent Memory Region: Not Supported 00:11:55.804 Optional Asynchronous Events Supported 00:11:55.804 Namespace Attribute Notices: Supported 00:11:55.804 Firmware Activation Notices: Not Supported 00:11:55.804 ANA Change Notices: Not Supported 00:11:55.804 PLE Aggregate Log Change Notices: Not Supported 00:11:55.804 LBA Status Info Alert Notices: Not Supported 00:11:55.804 EGE Aggregate Log Change Notices: Not Supported 00:11:55.804 Normal NVM Subsystem Shutdown event: Not Supported 00:11:55.804 Zone Descriptor Change Notices: Not Supported 00:11:55.804 Discovery Log Change Notices: Not Supported 00:11:55.804 Controller Attributes 00:11:55.804 128-bit Host Identifier: Supported 00:11:55.804 Non-Operational Permissive Mode: Not Supported 00:11:55.804 NVM Sets: Not Supported 00:11:55.804 Read Recovery Levels: Not Supported 00:11:55.804 Endurance Groups: Not Supported 00:11:55.804 Predictable Latency Mode: Not Supported 00:11:55.804 Traffic Based Keep ALive: Not Supported 00:11:55.804 Namespace Granularity: Not Supported 00:11:55.804 SQ Associations: Not Supported 00:11:55.804 UUID List: Not Supported 00:11:55.804 Multi-Domain Subsystem: Not Supported 00:11:55.804 Fixed Capacity Management: Not Supported 00:11:55.804 Variable Capacity Management: Not Supported 00:11:55.804 Delete Endurance Group: Not Supported 00:11:55.804 Delete NVM Set: Not Supported 00:11:55.804 Extended LBA Formats Supported: Not Supported 00:11:55.804 Flexible Data Placement 
Supported: Not Supported 00:11:55.804 00:11:55.804 Controller Memory Buffer Support 00:11:55.804 ================================ 00:11:55.804 Supported: No 00:11:55.804 00:11:55.804 Persistent Memory Region Support 00:11:55.804 ================================ 00:11:55.804 Supported: No 00:11:55.804 00:11:55.804 Admin Command Set Attributes 00:11:55.804 ============================ 00:11:55.804 Security Send/Receive: Not Supported 00:11:55.804 Format NVM: Not Supported 00:11:55.804 Firmware Activate/Download: Not Supported 00:11:55.804 Namespace Management: Not Supported 00:11:55.804 Device Self-Test: Not Supported 00:11:55.804 Directives: Not Supported 00:11:55.804 NVMe-MI: Not Supported 00:11:55.804 Virtualization Management: Not Supported 00:11:55.804 Doorbell Buffer Config: Not Supported 00:11:55.804 Get LBA Status Capability: Not Supported 00:11:55.804 Command & Feature Lockdown Capability: Not Supported 00:11:55.804 Abort Command Limit: 4 00:11:55.804 Async Event Request Limit: 4 00:11:55.804 Number of Firmware Slots: N/A 00:11:55.804 Firmware Slot 1 Read-Only: N/A 00:11:55.804 Firmware Activation Without Reset: N/A 00:11:55.804 Multiple Update Detection Support: N/A 00:11:55.804 Firmware Update Granularity: No Information Provided 00:11:55.804 Per-Namespace SMART Log: No 00:11:55.804 Asymmetric Namespace Access Log Page: Not Supported 00:11:55.804 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:55.804 Command Effects Log Page: Supported 00:11:55.804 Get Log Page Extended Data: Supported 00:11:55.804 Telemetry Log Pages: Not Supported 00:11:55.804 Persistent Event Log Pages: Not Supported 00:11:55.805 Supported Log Pages Log Page: May Support 00:11:55.805 Commands Supported & Effects Log Page: Not Supported 00:11:55.805 Feature Identifiers & Effects Log Page:May Support 00:11:55.805 NVMe-MI Commands & Effects Log Page: May Support 00:11:55.805 Data Area 4 for Telemetry Log: Not Supported 00:11:55.805 Error Log Page Entries Supported: 128 00:11:55.805 Keep 
Alive: Supported 00:11:55.805 Keep Alive Granularity: 10000 ms 00:11:55.805 00:11:55.805 NVM Command Set Attributes 00:11:55.805 ========================== 00:11:55.805 Submission Queue Entry Size 00:11:55.805 Max: 64 00:11:55.805 Min: 64 00:11:55.805 Completion Queue Entry Size 00:11:55.805 Max: 16 00:11:55.805 Min: 16 00:11:55.805 Number of Namespaces: 32 00:11:55.805 Compare Command: Supported 00:11:55.805 Write Uncorrectable Command: Not Supported 00:11:55.805 Dataset Management Command: Supported 00:11:55.805 Write Zeroes Command: Supported 00:11:55.805 Set Features Save Field: Not Supported 00:11:55.805 Reservations: Not Supported 00:11:55.805 Timestamp: Not Supported 00:11:55.805 Copy: Supported 00:11:55.805 Volatile Write Cache: Present 00:11:55.805 Atomic Write Unit (Normal): 1 00:11:55.805 Atomic Write Unit (PFail): 1 00:11:55.805 Atomic Compare & Write Unit: 1 00:11:55.805 Fused Compare & Write: Supported 00:11:55.805 Scatter-Gather List 00:11:55.805 SGL Command Set: Supported (Dword aligned) 00:11:55.805 SGL Keyed: Not Supported 00:11:55.805 SGL Bit Bucket Descriptor: Not Supported 00:11:55.805 SGL Metadata Pointer: Not Supported 00:11:55.805 Oversized SGL: Not Supported 00:11:55.805 SGL Metadata Address: Not Supported 00:11:55.805 SGL Offset: Not Supported 00:11:55.805 Transport SGL Data Block: Not Supported 00:11:55.805 Replay Protected Memory Block: Not Supported 00:11:55.805 00:11:55.805 Firmware Slot Information 00:11:55.805 ========================= 00:11:55.805 Active slot: 1 00:11:55.805 Slot 1 Firmware Revision: 24.09 00:11:55.805 00:11:55.805 00:11:55.805 Commands Supported and Effects 00:11:55.805 ============================== 00:11:55.805 Admin Commands 00:11:55.805 -------------- 00:11:55.805 Get Log Page (02h): Supported 00:11:55.805 Identify (06h): Supported 00:11:55.805 Abort (08h): Supported 00:11:55.805 Set Features (09h): Supported 00:11:55.805 Get Features (0Ah): Supported 00:11:55.805 Asynchronous Event Request (0Ch): Supported 
00:11:55.805 Keep Alive (18h): Supported 00:11:55.805 I/O Commands 00:11:55.805 ------------ 00:11:55.805 Flush (00h): Supported LBA-Change 00:11:55.805 Write (01h): Supported LBA-Change 00:11:55.805 Read (02h): Supported 00:11:55.805 Compare (05h): Supported 00:11:55.805 Write Zeroes (08h): Supported LBA-Change 00:11:55.805 Dataset Management (09h): Supported LBA-Change 00:11:55.805 Copy (19h): Supported LBA-Change 00:11:55.805 00:11:55.805 Error Log 00:11:55.805 ========= 00:11:55.805 00:11:55.805 Arbitration 00:11:55.805 =========== 00:11:55.805 Arbitration Burst: 1 00:11:55.805 00:11:55.805 Power Management 00:11:55.805 ================ 00:11:55.805 Number of Power States: 1 00:11:55.805 Current Power State: Power State #0 00:11:55.805 Power State #0: 00:11:55.805 Max Power: 0.00 W 00:11:55.805 Non-Operational State: Operational 00:11:55.805 Entry Latency: Not Reported 00:11:55.805 Exit Latency: Not Reported 00:11:55.805 Relative Read Throughput: 0 00:11:55.805 Relative Read Latency: 0 00:11:55.805 Relative Write Throughput: 0 00:11:55.805 Relative Write Latency: 0 00:11:55.805 Idle Power: Not Reported 00:11:55.805 Active Power: Not Reported 00:11:55.805 Non-Operational Permissive Mode: Not Supported 00:11:55.805 00:11:55.805 Health Information 00:11:55.805 ================== 00:11:55.805 Critical Warnings: 00:11:55.805 Available Spare Space: OK 00:11:55.805 Temperature: OK 00:11:55.805 Device Reliability: OK 00:11:55.805 Read Only: No 00:11:55.805 Volatile Memory Backup: OK 00:11:55.805 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:55.805 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:55.805 Available Spare: 0% 00:11:55.805 Available Sp[2024-07-15 12:41:47.597825] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:55.805 [2024-07-15 12:41:47.597839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 
00:11:55.805 [2024-07-15 12:41:47.597875] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:55.805 [2024-07-15 12:41:47.597887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.805 [2024-07-15 12:41:47.597895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.805 [2024-07-15 12:41:47.597903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.805 [2024-07-15 12:41:47.597911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.805 [2024-07-15 12:41:47.600264] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:55.805 [2024-07-15 12:41:47.600279] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:55.805 [2024-07-15 12:41:47.600678] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:55.805 [2024-07-15 12:41:47.600757] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:55.805 [2024-07-15 12:41:47.600766] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:55.805 [2024-07-15 12:41:47.601681] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:55.805 [2024-07-15 12:41:47.601695] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete 
in 0 milliseconds 00:11:55.805 [2024-07-15 12:41:47.601751] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:55.805 [2024-07-15 12:41:47.605264] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:55.805 are Threshold: 0% 00:11:55.805 Life Percentage Used: 0% 00:11:55.805 Data Units Read: 0 00:11:55.805 Data Units Written: 0 00:11:55.805 Host Read Commands: 0 00:11:55.805 Host Write Commands: 0 00:11:55.805 Controller Busy Time: 0 minutes 00:11:55.805 Power Cycles: 0 00:11:55.805 Power On Hours: 0 hours 00:11:55.805 Unsafe Shutdowns: 0 00:11:55.805 Unrecoverable Media Errors: 0 00:11:55.805 Lifetime Error Log Entries: 0 00:11:55.805 Warning Temperature Time: 0 minutes 00:11:55.805 Critical Temperature Time: 0 minutes 00:11:55.805 00:11:55.805 Number of Queues 00:11:55.805 ================ 00:11:55.805 Number of I/O Submission Queues: 127 00:11:55.805 Number of I/O Completion Queues: 127 00:11:55.805 00:11:55.805 Active Namespaces 00:11:55.805 ================= 00:11:55.805 Namespace ID:1 00:11:55.805 Error Recovery Timeout: Unlimited 00:11:55.805 Command Set Identifier: NVM (00h) 00:11:55.805 Deallocate: Supported 00:11:55.805 Deallocated/Unwritten Error: Not Supported 00:11:55.805 Deallocated Read Value: Unknown 00:11:55.805 Deallocate in Write Zeroes: Not Supported 00:11:55.805 Deallocated Guard Field: 0xFFFF 00:11:55.805 Flush: Supported 00:11:55.805 Reservation: Supported 00:11:55.805 Namespace Sharing Capabilities: Multiple Controllers 00:11:55.805 Size (in LBAs): 131072 (0GiB) 00:11:55.805 Capacity (in LBAs): 131072 (0GiB) 00:11:55.805 Utilization (in LBAs): 131072 (0GiB) 00:11:55.805 NGUID: 75040B5B58D149BFB4E889E02C690881 00:11:55.805 UUID: 75040b5b-58d1-49bf-b4e8-89e02c690881 00:11:55.805 Thin Provisioning: Not Supported 00:11:55.805 Per-NS Atomic Units: Yes 00:11:55.805 Atomic Boundary Size (Normal): 0 
00:11:55.805 Atomic Boundary Size (PFail): 0 00:11:55.805 Atomic Boundary Offset: 0 00:11:55.805 Maximum Single Source Range Length: 65535 00:11:55.805 Maximum Copy Length: 65535 00:11:55.805 Maximum Source Range Count: 1 00:11:55.805 NGUID/EUI64 Never Reused: No 00:11:55.805 Namespace Write Protected: No 00:11:55.805 Number of LBA Formats: 1 00:11:55.805 Current LBA Format: LBA Format #00 00:11:55.805 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:55.805 00:11:55.805 12:41:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:55.805 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.065 [2024-07-15 12:41:47.866571] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:01.335 Initializing NVMe Controllers 00:12:01.335 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:01.335 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:01.335 Initialization complete. Launching workers. 
00:12:01.335 ======================================================== 00:12:01.335 Latency(us) 00:12:01.335 Device Information : IOPS MiB/s Average min max 00:12:01.335 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 18636.75 72.80 6867.52 2674.60 13409.30 00:12:01.335 ======================================================== 00:12:01.335 Total : 18636.75 72.80 6867.52 2674.60 13409.30 00:12:01.335 00:12:01.335 [2024-07-15 12:41:52.888958] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:01.335 12:41:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:01.335 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.335 [2024-07-15 12:41:53.178859] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:06.604 Initializing NVMe Controllers 00:12:06.604 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:06.604 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:06.604 Initialization complete. Launching workers. 
00:12:06.604 ======================================================== 00:12:06.604 Latency(us) 00:12:06.604 Device Information : IOPS MiB/s Average min max 00:12:06.604 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15758.49 61.56 8121.52 7309.77 15057.72 00:12:06.604 ======================================================== 00:12:06.604 Total : 15758.49 61.56 8121.52 7309.77 15057.72 00:12:06.604 00:12:06.604 [2024-07-15 12:41:58.211897] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:06.604 12:41:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:06.604 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.604 [2024-07-15 12:41:58.501802] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:11.880 [2024-07-15 12:42:03.592845] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:11.880 Initializing NVMe Controllers 00:12:11.880 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:11.880 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:11.880 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:11.880 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:11.880 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:11.880 Initialization complete. Launching workers. 
00:12:11.880 Starting thread on core 2 00:12:11.880 Starting thread on core 3 00:12:11.880 Starting thread on core 1 00:12:11.880 12:42:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:11.880 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.139 [2024-07-15 12:42:03.934010] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:15.424 [2024-07-15 12:42:06.998977] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:15.424 Initializing NVMe Controllers 00:12:15.424 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:15.424 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:15.424 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:15.424 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:15.424 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:15.424 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:15.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:15.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:15.424 Initialization complete. Launching workers. 
00:12:15.424 Starting thread on core 1 with urgent priority queue 00:12:15.424 Starting thread on core 2 with urgent priority queue 00:12:15.424 Starting thread on core 3 with urgent priority queue 00:12:15.424 Starting thread on core 0 with urgent priority queue 00:12:15.424 SPDK bdev Controller (SPDK1 ) core 0: 4509.67 IO/s 22.17 secs/100000 ios 00:12:15.424 SPDK bdev Controller (SPDK1 ) core 1: 4969.33 IO/s 20.12 secs/100000 ios 00:12:15.424 SPDK bdev Controller (SPDK1 ) core 2: 7278.00 IO/s 13.74 secs/100000 ios 00:12:15.424 SPDK bdev Controller (SPDK1 ) core 3: 6787.67 IO/s 14.73 secs/100000 ios 00:12:15.424 ======================================================== 00:12:15.424 00:12:15.424 12:42:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:15.424 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.424 [2024-07-15 12:42:07.327067] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:15.424 Initializing NVMe Controllers 00:12:15.424 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:15.424 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:15.424 Namespace ID: 1 size: 0GB 00:12:15.424 Initialization complete. 00:12:15.424 INFO: using host memory buffer for IO 00:12:15.424 Hello world! 
00:12:15.424 [2024-07-15 12:42:07.360511] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:15.682 12:42:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:15.682 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.940 [2024-07-15 12:42:07.685011] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:16.876 Initializing NVMe Controllers 00:12:16.876 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:16.876 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:16.876 Initialization complete. Launching workers. 00:12:16.876 submit (in ns) avg, min, max = 11154.3, 4537.3, 4003310.9 00:12:16.876 complete (in ns) avg, min, max = 45389.6, 2727.3, 6992457.3 00:12:16.876 00:12:16.876 Submit histogram 00:12:16.876 ================ 00:12:16.876 Range in us Cumulative Count 00:12:16.876 4.509 - 4.538: 0.0144% ( 1) 00:12:16.876 4.538 - 4.567: 0.3752% ( 25) 00:12:16.876 4.567 - 4.596: 1.3853% ( 70) 00:12:16.877 4.596 - 4.625: 3.4055% ( 140) 00:12:16.877 4.625 - 4.655: 6.6522% ( 225) 00:12:16.877 4.655 - 4.684: 15.3824% ( 605) 00:12:16.877 4.684 - 4.713: 24.6753% ( 644) 00:12:16.877 4.713 - 4.742: 35.8009% ( 771) 00:12:16.877 4.742 - 4.771: 48.2684% ( 864) 00:12:16.877 4.771 - 4.800: 59.2641% ( 762) 00:12:16.877 4.800 - 4.829: 69.6825% ( 722) 00:12:16.877 4.829 - 4.858: 76.7965% ( 493) 00:12:16.877 4.858 - 4.887: 82.1068% ( 368) 00:12:16.877 4.887 - 4.916: 85.7864% ( 255) 00:12:16.877 4.916 - 4.945: 87.9509% ( 150) 00:12:16.877 4.945 - 4.975: 89.5382% ( 110) 00:12:16.877 4.975 - 5.004: 90.9524% ( 98) 00:12:16.877 5.004 - 5.033: 92.6840% ( 120) 00:12:16.877 5.033 - 5.062: 94.5599% ( 130) 00:12:16.877 5.062 - 5.091: 96.2482% ( 117) 
00:12:16.877 5.091 - 5.120: 97.5469% ( 90) 00:12:16.877 5.120 - 5.149: 98.2828% ( 51) 00:12:16.877 5.149 - 5.178: 98.9322% ( 45) 00:12:16.877 5.178 - 5.207: 99.2063% ( 19) 00:12:16.877 5.207 - 5.236: 99.3651% ( 11) 00:12:16.877 5.236 - 5.265: 99.4517% ( 6) 00:12:16.877 5.265 - 5.295: 99.4661% ( 1) 00:12:16.877 7.011 - 7.040: 99.4805% ( 1) 00:12:16.877 7.738 - 7.796: 99.4949% ( 1) 00:12:16.877 7.855 - 7.913: 99.5094% ( 1) 00:12:16.877 8.145 - 8.204: 99.5382% ( 2) 00:12:16.877 8.204 - 8.262: 99.5527% ( 1) 00:12:16.877 8.262 - 8.320: 99.5815% ( 2) 00:12:16.877 8.378 - 8.436: 99.5960% ( 1) 00:12:16.877 8.436 - 8.495: 99.6104% ( 1) 00:12:16.877 8.669 - 8.727: 99.6248% ( 1) 00:12:16.877 8.785 - 8.844: 99.6392% ( 1) 00:12:16.877 8.960 - 9.018: 99.6537% ( 1) 00:12:16.877 9.076 - 9.135: 99.6825% ( 2) 00:12:16.877 9.309 - 9.367: 99.6970% ( 1) 00:12:16.877 9.542 - 9.600: 99.7258% ( 2) 00:12:16.877 9.600 - 9.658: 99.7547% ( 2) 00:12:16.877 9.716 - 9.775: 99.7691% ( 1) 00:12:16.877 9.775 - 9.833: 99.7980% ( 2) 00:12:16.877 9.949 - 10.007: 99.8124% ( 1) 00:12:16.877 10.007 - 10.065: 99.8268% ( 1) 00:12:16.877 10.356 - 10.415: 99.8413% ( 1) 00:12:16.877 3991.738 - 4021.527: 100.0000% ( 11) 00:12:16.877 00:12:16.877 Complete histogram 00:12:16.877 ================== 00:12:16.877 Range in us Cumulative Count 00:12:16.877 2.720 - 2.735: 0.2020% ( 14) 00:12:16.877 2.735 - 2.749: 7.2006% ( 485) 00:12:16.877 2.749 - 2.764: 32.1356% ( 1728) 00:12:16.877 2.764 - 2.778: 48.7734% ( 1153) 00:12:16.877 2.778 - 2.793: 53.7518% ( 345) 00:12:16.877 2.793 - 2.807: 56.7677% ( 209) 00:12:16.877 2.807 - 2.822: 65.5411% ( 608) 00:12:16.877 2.822 - 2.836: 82.9004% ( 1203) 00:12:16.877 2.836 - 2.851: 90.8802% ( 553) 00:12:16.877 2.851 - 2.865: 93.9250% ( 211) 00:12:16.877 2.865 - 2.880: 95.7287% ( 125) 00:12:16.877 2.880 - 2.895: 96.4502% ( 50) 00:12:16.877 2.895 - 2.909: 97.0851% ( 44) 00:12:16.877 2.909 - 2.924: 97.7201% ( 44) 00:12:16.877 2.924 - 2.938: 98.1241% ( 28) 00:12:16.877 2.938 - 2.953: 
98.3550% ( 16) 00:12:16.877 2.953 - 2.967: 98.5137% ( 11) 00:12:16.877 2.967 - 2.982: 98.6291% ( 8) 00:12:16.877 2.982 - 2.996: 98.6724% ( 3) 00:12:16.877 2.996 - 3.011: 98.7013% ( 2) 00:12:16.877 3.040 - 3.055: 98.7157% ( 1) 00:12:16.877 6.138 - 6.167: 98.7302% ( 1) 00:12:16.877 6.255 - 6.284: 98.7446% ( 1) 00:12:16.877 6.284 - 6.313: 98.7590% ( 1) 00:12:16.877 6.313 - 6.342: 98.7734% ( 1) 00:12:16.877 6.487 - 6.516: 98.7879% ( 1) 00:12:16.877 6.633 - 6.662: 98.8023% ( 1) 00:12:16.877 6.895 - 6.924: 98.8167% ( 1) 00:12:16.877 6.982 - 7.011: 98.8312% ( 1) 00:12:16.877 7.127 - [2024-07-15 12:42:08.713140] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:16.877 7.156: 98.8456% ( 1) 00:12:16.877 7.331 - 7.360: 98.8600% ( 1) 00:12:16.877 7.564 - 7.622: 98.8745% ( 1) 00:12:16.877 7.622 - 7.680: 98.8889% ( 1) 00:12:16.877 7.796 - 7.855: 98.9033% ( 1) 00:12:16.877 7.971 - 8.029: 98.9177% ( 1) 00:12:16.877 8.902 - 8.960: 98.9322% ( 1) 00:12:16.877 1184.116 - 1191.564: 98.9466% ( 1) 00:12:16.877 1802.240 - 1809.687: 98.9610% ( 1) 00:12:16.877 3410.851 - 3425.745: 98.9755% ( 1) 00:12:16.877 3991.738 - 4021.527: 99.9711% ( 69) 00:12:16.877 5987.607 - 6017.396: 99.9856% ( 1) 00:12:16.877 6970.647 - 7000.436: 100.0000% ( 1) 00:12:16.877 00:12:16.877 12:42:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:16.877 12:42:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:16.877 12:42:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:16.877 12:42:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:16.877 12:42:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 
00:12:17.135 [ 00:12:17.135 { 00:12:17.135 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:17.135 "subtype": "Discovery", 00:12:17.135 "listen_addresses": [], 00:12:17.135 "allow_any_host": true, 00:12:17.135 "hosts": [] 00:12:17.135 }, 00:12:17.135 { 00:12:17.135 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:17.135 "subtype": "NVMe", 00:12:17.135 "listen_addresses": [ 00:12:17.135 { 00:12:17.135 "trtype": "VFIOUSER", 00:12:17.135 "adrfam": "IPv4", 00:12:17.135 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:17.135 "trsvcid": "0" 00:12:17.135 } 00:12:17.135 ], 00:12:17.135 "allow_any_host": true, 00:12:17.135 "hosts": [], 00:12:17.135 "serial_number": "SPDK1", 00:12:17.135 "model_number": "SPDK bdev Controller", 00:12:17.135 "max_namespaces": 32, 00:12:17.135 "min_cntlid": 1, 00:12:17.135 "max_cntlid": 65519, 00:12:17.135 "namespaces": [ 00:12:17.135 { 00:12:17.135 "nsid": 1, 00:12:17.135 "bdev_name": "Malloc1", 00:12:17.135 "name": "Malloc1", 00:12:17.135 "nguid": "75040B5B58D149BFB4E889E02C690881", 00:12:17.135 "uuid": "75040b5b-58d1-49bf-b4e8-89e02c690881" 00:12:17.135 } 00:12:17.135 ] 00:12:17.135 }, 00:12:17.135 { 00:12:17.135 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:17.135 "subtype": "NVMe", 00:12:17.135 "listen_addresses": [ 00:12:17.135 { 00:12:17.135 "trtype": "VFIOUSER", 00:12:17.135 "adrfam": "IPv4", 00:12:17.135 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:17.135 "trsvcid": "0" 00:12:17.135 } 00:12:17.135 ], 00:12:17.135 "allow_any_host": true, 00:12:17.135 "hosts": [], 00:12:17.135 "serial_number": "SPDK2", 00:12:17.135 "model_number": "SPDK bdev Controller", 00:12:17.135 "max_namespaces": 32, 00:12:17.135 "min_cntlid": 1, 00:12:17.135 "max_cntlid": 65519, 00:12:17.135 "namespaces": [ 00:12:17.135 { 00:12:17.135 "nsid": 1, 00:12:17.135 "bdev_name": "Malloc2", 00:12:17.135 "name": "Malloc2", 00:12:17.135 "nguid": "65997B459C834B08858A42EBB593276D", 00:12:17.135 "uuid": "65997b45-9c83-4b08-858a-42ebb593276d" 00:12:17.135 } 
00:12:17.135 ] 00:12:17.135 } 00:12:17.135 ] 00:12:17.135 12:42:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:17.135 12:42:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3850384 00:12:17.135 12:42:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:17.135 12:42:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:17.135 12:42:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:17.135 12:42:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:17.135 12:42:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:17.135 12:42:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:17.135 12:42:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:17.135 12:42:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:17.393 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.393 [2024-07-15 12:42:09.221097] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:17.393 Malloc3 00:12:17.393 12:42:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:17.651 [2024-07-15 12:42:09.557436] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:17.651 12:42:09 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:17.910 Asynchronous Event Request test 00:12:17.910 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:17.910 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:17.910 Registering asynchronous event callbacks... 00:12:17.910 Starting namespace attribute notice tests for all controllers... 00:12:17.910 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:17.910 aer_cb - Changed Namespace 00:12:17.910 Cleaning up... 00:12:17.910 [ 00:12:17.910 { 00:12:17.910 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:17.910 "subtype": "Discovery", 00:12:17.910 "listen_addresses": [], 00:12:17.910 "allow_any_host": true, 00:12:17.910 "hosts": [] 00:12:17.910 }, 00:12:17.910 { 00:12:17.910 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:17.910 "subtype": "NVMe", 00:12:17.910 "listen_addresses": [ 00:12:17.910 { 00:12:17.910 "trtype": "VFIOUSER", 00:12:17.910 "adrfam": "IPv4", 00:12:17.910 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:17.910 "trsvcid": "0" 00:12:17.910 } 00:12:17.910 ], 00:12:17.910 "allow_any_host": true, 00:12:17.910 "hosts": [], 00:12:17.910 "serial_number": "SPDK1", 00:12:17.910 "model_number": "SPDK bdev Controller", 00:12:17.910 "max_namespaces": 32, 00:12:17.910 "min_cntlid": 1, 00:12:17.910 "max_cntlid": 65519, 00:12:17.910 "namespaces": [ 00:12:17.910 { 00:12:17.910 "nsid": 1, 00:12:17.910 "bdev_name": "Malloc1", 00:12:17.910 "name": "Malloc1", 00:12:17.910 "nguid": "75040B5B58D149BFB4E889E02C690881", 00:12:17.910 "uuid": "75040b5b-58d1-49bf-b4e8-89e02c690881" 00:12:17.910 }, 00:12:17.910 { 00:12:17.910 "nsid": 2, 00:12:17.910 "bdev_name": "Malloc3", 00:12:17.910 "name": "Malloc3", 00:12:17.910 "nguid": "06D06762A18A498EBC804E60B4AF889F", 00:12:17.910 "uuid": "06d06762-a18a-498e-bc80-4e60b4af889f" 00:12:17.910 } 00:12:17.910 
] 00:12:17.910 }, 00:12:17.910 { 00:12:17.910 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:17.910 "subtype": "NVMe", 00:12:17.910 "listen_addresses": [ 00:12:17.910 { 00:12:17.910 "trtype": "VFIOUSER", 00:12:17.910 "adrfam": "IPv4", 00:12:17.910 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:17.910 "trsvcid": "0" 00:12:17.910 } 00:12:17.910 ], 00:12:17.910 "allow_any_host": true, 00:12:17.910 "hosts": [], 00:12:17.910 "serial_number": "SPDK2", 00:12:17.910 "model_number": "SPDK bdev Controller", 00:12:17.910 "max_namespaces": 32, 00:12:17.910 "min_cntlid": 1, 00:12:17.910 "max_cntlid": 65519, 00:12:17.910 "namespaces": [ 00:12:17.910 { 00:12:17.910 "nsid": 1, 00:12:17.910 "bdev_name": "Malloc2", 00:12:17.910 "name": "Malloc2", 00:12:17.910 "nguid": "65997B459C834B08858A42EBB593276D", 00:12:17.910 "uuid": "65997b45-9c83-4b08-858a-42ebb593276d" 00:12:17.910 } 00:12:17.910 ] 00:12:17.910 } 00:12:17.910 ] 00:12:17.910 12:42:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3850384 00:12:17.910 12:42:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:17.910 12:42:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:17.910 12:42:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:17.910 12:42:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:18.171 [2024-07-15 12:42:09.870383] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:12:18.171 [2024-07-15 12:42:09.870419] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3850589 ] 00:12:18.171 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.171 [2024-07-15 12:42:09.907005] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:18.171 [2024-07-15 12:42:09.909315] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:18.171 [2024-07-15 12:42:09.909341] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa50fdbb000 00:12:18.171 [2024-07-15 12:42:09.910322] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:18.171 [2024-07-15 12:42:09.911337] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:18.171 [2024-07-15 12:42:09.912336] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:18.171 [2024-07-15 12:42:09.913356] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:18.171 [2024-07-15 12:42:09.914374] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:18.171 [2024-07-15 12:42:09.915386] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:18.171 [2024-07-15 12:42:09.916396] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:18.171 [2024-07-15 12:42:09.917404] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:18.171 [2024-07-15 12:42:09.918423] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:18.171 [2024-07-15 12:42:09.918436] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa50fdb0000 00:12:18.171 [2024-07-15 12:42:09.919844] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:18.171 [2024-07-15 12:42:09.939609] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:18.171 [2024-07-15 12:42:09.939639] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:18.171 [2024-07-15 12:42:09.941724] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:18.171 [2024-07-15 12:42:09.941775] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:18.171 [2024-07-15 12:42:09.941866] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:18.171 [2024-07-15 12:42:09.941887] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:18.171 [2024-07-15 12:42:09.941897] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:18.171 [2024-07-15 12:42:09.942733] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:18.172 [2024-07-15 12:42:09.942746] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:18.172 [2024-07-15 12:42:09.942756] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:18.172 [2024-07-15 12:42:09.943750] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:18.172 [2024-07-15 12:42:09.943763] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:18.172 [2024-07-15 12:42:09.943772] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:18.172 [2024-07-15 12:42:09.944761] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:18.172 [2024-07-15 12:42:09.944774] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:18.172 [2024-07-15 12:42:09.945771] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:18.172 [2024-07-15 12:42:09.945783] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:18.172 [2024-07-15 12:42:09.945789] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:18.172 [2024-07-15 12:42:09.945798] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:18.172 [2024-07-15 12:42:09.945905] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:18.172 [2024-07-15 12:42:09.945911] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:18.172 [2024-07-15 12:42:09.945917] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:18.172 [2024-07-15 12:42:09.950262] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:18.172 [2024-07-15 12:42:09.950813] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:18.172 [2024-07-15 12:42:09.951822] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:18.172 [2024-07-15 12:42:09.952828] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:18.172 [2024-07-15 12:42:09.952877] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:18.172 [2024-07-15 12:42:09.953839] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:18.172 [2024-07-15 12:42:09.953852] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:18.172 [2024-07-15 12:42:09.953858] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:18.172 [2024-07-15 12:42:09.953883] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:18.172 [2024-07-15 12:42:09.953900] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:18.172 [2024-07-15 12:42:09.953915] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:18.172 [2024-07-15 12:42:09.953922] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:18.172 [2024-07-15 12:42:09.953936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:18.172 [2024-07-15 12:42:09.961264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:18.172 [2024-07-15 12:42:09.961280] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:18.172 [2024-07-15 12:42:09.961289] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:18.172 [2024-07-15 12:42:09.961295] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:18.172 [2024-07-15 12:42:09.961301] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:18.172 [2024-07-15 12:42:09.961307] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:18.172 [2024-07-15 
12:42:09.961313] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:18.172 [2024-07-15 12:42:09.961319] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:18.172 [2024-07-15 12:42:09.961329] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:18.172 [2024-07-15 12:42:09.961342] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:18.172 [2024-07-15 12:42:09.969264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:18.172 [2024-07-15 12:42:09.969284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.172 [2024-07-15 12:42:09.969295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.172 [2024-07-15 12:42:09.969306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.172 [2024-07-15 12:42:09.969316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.172 [2024-07-15 12:42:09.969322] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:18.172 [2024-07-15 12:42:09.969333] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:18.172 [2024-07-15 
12:42:09.969344] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:18.172 [2024-07-15 12:42:09.977263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:18.172 [2024-07-15 12:42:09.977273] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:18.172 [2024-07-15 12:42:09.977280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:18.172 [2024-07-15 12:42:09.977292] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:18.172 [2024-07-15 12:42:09.977299] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:18.172 [2024-07-15 12:42:09.977311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:18.172 [2024-07-15 12:42:09.985263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:18.172 [2024-07-15 12:42:09.985339] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:18.172 [2024-07-15 12:42:09.985351] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:18.172 [2024-07-15 12:42:09.985360] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:18.172 [2024-07-15 
12:42:09.985366] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:18.172 [2024-07-15 12:42:09.985375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:18.172 [2024-07-15 12:42:09.993267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:18.172 [2024-07-15 12:42:09.993281] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:18.172 [2024-07-15 12:42:09.993293] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:18.172 [2024-07-15 12:42:09.993302] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:18.172 [2024-07-15 12:42:09.993312] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:18.172 [2024-07-15 12:42:09.993317] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:18.172 [2024-07-15 12:42:09.993325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:18.172 [2024-07-15 12:42:10.001279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:18.172 [2024-07-15 12:42:10.001310] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:18.172 [2024-07-15 12:42:10.001323] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id 
descriptors (timeout 30000 ms) 00:12:18.172 [2024-07-15 12:42:10.001335] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:18.172 [2024-07-15 12:42:10.001342] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:18.172 [2024-07-15 12:42:10.001351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:18.172 [2024-07-15 12:42:10.009266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:18.172 [2024-07-15 12:42:10.009282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:18.172 [2024-07-15 12:42:10.009291] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:18.172 [2024-07-15 12:42:10.009303] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:18.172 [2024-07-15 12:42:10.009316] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:18.172 [2024-07-15 12:42:10.009323] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:18.172 [2024-07-15 12:42:10.009330] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:18.172 [2024-07-15 12:42:10.009336] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - 
Host ID 00:12:18.172 [2024-07-15 12:42:10.009342] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:18.172 [2024-07-15 12:42:10.009349] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:18.172 [2024-07-15 12:42:10.009369] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:18.172 [2024-07-15 12:42:10.017268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:18.172 [2024-07-15 12:42:10.017287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:18.172 [2024-07-15 12:42:10.025264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:18.173 [2024-07-15 12:42:10.025282] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:18.173 [2024-07-15 12:42:10.033262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:18.173 [2024-07-15 12:42:10.033279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:18.173 [2024-07-15 12:42:10.041261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:18.173 [2024-07-15 12:42:10.041286] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:18.173 [2024-07-15 12:42:10.041293] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:18.173 [2024-07-15 
12:42:10.041298] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:18.173 [2024-07-15 12:42:10.041303] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:18.173 [2024-07-15 12:42:10.041311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:18.173 [2024-07-15 12:42:10.041321] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:18.173 [2024-07-15 12:42:10.041327] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:18.173 [2024-07-15 12:42:10.041335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:18.173 [2024-07-15 12:42:10.041344] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:18.173 [2024-07-15 12:42:10.041350] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:18.173 [2024-07-15 12:42:10.041357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:18.173 [2024-07-15 12:42:10.041367] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:18.173 [2024-07-15 12:42:10.041376] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:18.173 [2024-07-15 12:42:10.041384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:18.173 [2024-07-15 12:42:10.049263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:18.173 [2024-07-15 12:42:10.049284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:18.173 [2024-07-15 12:42:10.049298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:18.173 [2024-07-15 12:42:10.049307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:18.173 ===================================================== 00:12:18.173 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:18.173 ===================================================== 00:12:18.173 Controller Capabilities/Features 00:12:18.173 ================================ 00:12:18.173 Vendor ID: 4e58 00:12:18.173 Subsystem Vendor ID: 4e58 00:12:18.173 Serial Number: SPDK2 00:12:18.173 Model Number: SPDK bdev Controller 00:12:18.173 Firmware Version: 24.09 00:12:18.173 Recommended Arb Burst: 6 00:12:18.173 IEEE OUI Identifier: 8d 6b 50 00:12:18.173 Multi-path I/O 00:12:18.173 May have multiple subsystem ports: Yes 00:12:18.173 May have multiple controllers: Yes 00:12:18.173 Associated with SR-IOV VF: No 00:12:18.173 Max Data Transfer Size: 131072 00:12:18.173 Max Number of Namespaces: 32 00:12:18.173 Max Number of I/O Queues: 127 00:12:18.173 NVMe Specification Version (VS): 1.3 00:12:18.173 NVMe Specification Version (Identify): 1.3 00:12:18.173 Maximum Queue Entries: 256 00:12:18.173 Contiguous Queues Required: Yes 00:12:18.173 Arbitration Mechanisms Supported 00:12:18.173 Weighted Round Robin: Not Supported 00:12:18.173 Vendor Specific: Not Supported 00:12:18.173 Reset Timeout: 15000 ms 00:12:18.173 Doorbell Stride: 4 bytes 00:12:18.173 NVM Subsystem Reset: Not Supported 00:12:18.173 Command Sets Supported 00:12:18.173 NVM Command Set: Supported 00:12:18.173 Boot Partition: Not Supported 
00:12:18.173 Memory Page Size Minimum: 4096 bytes 00:12:18.173 Memory Page Size Maximum: 4096 bytes 00:12:18.173 Persistent Memory Region: Not Supported 00:12:18.173 Optional Asynchronous Events Supported 00:12:18.173 Namespace Attribute Notices: Supported 00:12:18.173 Firmware Activation Notices: Not Supported 00:12:18.173 ANA Change Notices: Not Supported 00:12:18.173 PLE Aggregate Log Change Notices: Not Supported 00:12:18.173 LBA Status Info Alert Notices: Not Supported 00:12:18.173 EGE Aggregate Log Change Notices: Not Supported 00:12:18.173 Normal NVM Subsystem Shutdown event: Not Supported 00:12:18.173 Zone Descriptor Change Notices: Not Supported 00:12:18.173 Discovery Log Change Notices: Not Supported 00:12:18.173 Controller Attributes 00:12:18.173 128-bit Host Identifier: Supported 00:12:18.173 Non-Operational Permissive Mode: Not Supported 00:12:18.173 NVM Sets: Not Supported 00:12:18.173 Read Recovery Levels: Not Supported 00:12:18.173 Endurance Groups: Not Supported 00:12:18.173 Predictable Latency Mode: Not Supported 00:12:18.173 Traffic Based Keep ALive: Not Supported 00:12:18.173 Namespace Granularity: Not Supported 00:12:18.173 SQ Associations: Not Supported 00:12:18.173 UUID List: Not Supported 00:12:18.173 Multi-Domain Subsystem: Not Supported 00:12:18.173 Fixed Capacity Management: Not Supported 00:12:18.173 Variable Capacity Management: Not Supported 00:12:18.173 Delete Endurance Group: Not Supported 00:12:18.173 Delete NVM Set: Not Supported 00:12:18.173 Extended LBA Formats Supported: Not Supported 00:12:18.173 Flexible Data Placement Supported: Not Supported 00:12:18.173 00:12:18.173 Controller Memory Buffer Support 00:12:18.173 ================================ 00:12:18.173 Supported: No 00:12:18.173 00:12:18.173 Persistent Memory Region Support 00:12:18.173 ================================ 00:12:18.173 Supported: No 00:12:18.173 00:12:18.173 Admin Command Set Attributes 00:12:18.173 ============================ 00:12:18.173 Security 
Send/Receive: Not Supported 00:12:18.173 Format NVM: Not Supported 00:12:18.173 Firmware Activate/Download: Not Supported 00:12:18.173 Namespace Management: Not Supported 00:12:18.173 Device Self-Test: Not Supported 00:12:18.173 Directives: Not Supported 00:12:18.173 NVMe-MI: Not Supported 00:12:18.173 Virtualization Management: Not Supported 00:12:18.173 Doorbell Buffer Config: Not Supported 00:12:18.173 Get LBA Status Capability: Not Supported 00:12:18.173 Command & Feature Lockdown Capability: Not Supported 00:12:18.173 Abort Command Limit: 4 00:12:18.173 Async Event Request Limit: 4 00:12:18.173 Number of Firmware Slots: N/A 00:12:18.173 Firmware Slot 1 Read-Only: N/A 00:12:18.173 Firmware Activation Without Reset: N/A 00:12:18.173 Multiple Update Detection Support: N/A 00:12:18.173 Firmware Update Granularity: No Information Provided 00:12:18.173 Per-Namespace SMART Log: No 00:12:18.173 Asymmetric Namespace Access Log Page: Not Supported 00:12:18.173 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:18.173 Command Effects Log Page: Supported 00:12:18.173 Get Log Page Extended Data: Supported 00:12:18.173 Telemetry Log Pages: Not Supported 00:12:18.173 Persistent Event Log Pages: Not Supported 00:12:18.173 Supported Log Pages Log Page: May Support 00:12:18.173 Commands Supported & Effects Log Page: Not Supported 00:12:18.173 Feature Identifiers & Effects Log Page:May Support 00:12:18.173 NVMe-MI Commands & Effects Log Page: May Support 00:12:18.173 Data Area 4 for Telemetry Log: Not Supported 00:12:18.173 Error Log Page Entries Supported: 128 00:12:18.173 Keep Alive: Supported 00:12:18.173 Keep Alive Granularity: 10000 ms 00:12:18.173 00:12:18.173 NVM Command Set Attributes 00:12:18.173 ========================== 00:12:18.173 Submission Queue Entry Size 00:12:18.173 Max: 64 00:12:18.173 Min: 64 00:12:18.173 Completion Queue Entry Size 00:12:18.173 Max: 16 00:12:18.173 Min: 16 00:12:18.173 Number of Namespaces: 32 00:12:18.173 Compare Command: Supported 
00:12:18.173 Write Uncorrectable Command: Not Supported 00:12:18.173 Dataset Management Command: Supported 00:12:18.173 Write Zeroes Command: Supported 00:12:18.173 Set Features Save Field: Not Supported 00:12:18.173 Reservations: Not Supported 00:12:18.173 Timestamp: Not Supported 00:12:18.173 Copy: Supported 00:12:18.173 Volatile Write Cache: Present 00:12:18.173 Atomic Write Unit (Normal): 1 00:12:18.173 Atomic Write Unit (PFail): 1 00:12:18.173 Atomic Compare & Write Unit: 1 00:12:18.173 Fused Compare & Write: Supported 00:12:18.173 Scatter-Gather List 00:12:18.173 SGL Command Set: Supported (Dword aligned) 00:12:18.173 SGL Keyed: Not Supported 00:12:18.173 SGL Bit Bucket Descriptor: Not Supported 00:12:18.173 SGL Metadata Pointer: Not Supported 00:12:18.173 Oversized SGL: Not Supported 00:12:18.173 SGL Metadata Address: Not Supported 00:12:18.173 SGL Offset: Not Supported 00:12:18.173 Transport SGL Data Block: Not Supported 00:12:18.173 Replay Protected Memory Block: Not Supported 00:12:18.173 00:12:18.173 Firmware Slot Information 00:12:18.173 ========================= 00:12:18.173 Active slot: 1 00:12:18.173 Slot 1 Firmware Revision: 24.09 00:12:18.173 00:12:18.173 00:12:18.173 Commands Supported and Effects 00:12:18.173 ============================== 00:12:18.173 Admin Commands 00:12:18.173 -------------- 00:12:18.173 Get Log Page (02h): Supported 00:12:18.173 Identify (06h): Supported 00:12:18.173 Abort (08h): Supported 00:12:18.173 Set Features (09h): Supported 00:12:18.173 Get Features (0Ah): Supported 00:12:18.173 Asynchronous Event Request (0Ch): Supported 00:12:18.173 Keep Alive (18h): Supported 00:12:18.173 I/O Commands 00:12:18.174 ------------ 00:12:18.174 Flush (00h): Supported LBA-Change 00:12:18.174 Write (01h): Supported LBA-Change 00:12:18.174 Read (02h): Supported 00:12:18.174 Compare (05h): Supported 00:12:18.174 Write Zeroes (08h): Supported LBA-Change 00:12:18.174 Dataset Management (09h): Supported LBA-Change 00:12:18.174 Copy (19h): 
Supported LBA-Change 00:12:18.174 00:12:18.174 Error Log 00:12:18.174 ========= 00:12:18.174 00:12:18.174 Arbitration 00:12:18.174 =========== 00:12:18.174 Arbitration Burst: 1 00:12:18.174 00:12:18.174 Power Management 00:12:18.174 ================ 00:12:18.174 Number of Power States: 1 00:12:18.174 Current Power State: Power State #0 00:12:18.174 Power State #0: 00:12:18.174 Max Power: 0.00 W 00:12:18.174 Non-Operational State: Operational 00:12:18.174 Entry Latency: Not Reported 00:12:18.174 Exit Latency: Not Reported 00:12:18.174 Relative Read Throughput: 0 00:12:18.174 Relative Read Latency: 0 00:12:18.174 Relative Write Throughput: 0 00:12:18.174 Relative Write Latency: 0 00:12:18.174 Idle Power: Not Reported 00:12:18.174 Active Power: Not Reported 00:12:18.174 Non-Operational Permissive Mode: Not Supported 00:12:18.174 00:12:18.174 Health Information 00:12:18.174 ================== 00:12:18.174 Critical Warnings: 00:12:18.174 Available Spare Space: OK 00:12:18.174 Temperature: OK 00:12:18.174 Device Reliability: OK 00:12:18.174 Read Only: No 00:12:18.174 Volatile Memory Backup: OK 00:12:18.174 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:18.174 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:18.174 Available Spare: 0% 00:12:18.174 Available Sp[2024-07-15 12:42:10.049427] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:18.174 [2024-07-15 12:42:10.057263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:18.174 [2024-07-15 12:42:10.057306] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:18.174 [2024-07-15 12:42:10.057319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.174 [2024-07-15 12:42:10.057328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.174 [2024-07-15 12:42:10.057335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.174 [2024-07-15 12:42:10.057344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.174 [2024-07-15 12:42:10.057415] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:18.174 [2024-07-15 12:42:10.057430] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:18.174 [2024-07-15 12:42:10.058420] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:18.174 [2024-07-15 12:42:10.058480] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:18.174 [2024-07-15 12:42:10.058489] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:18.174 [2024-07-15 12:42:10.059424] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:18.174 [2024-07-15 12:42:10.059440] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:18.174 [2024-07-15 12:42:10.059498] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:18.174 [2024-07-15 12:42:10.060960] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:18.433 are Threshold: 0% 00:12:18.433 
Life Percentage Used: 0% 00:12:18.433 Data Units Read: 0 00:12:18.433 Data Units Written: 0 00:12:18.433 Host Read Commands: 0 00:12:18.433 Host Write Commands: 0 00:12:18.433 Controller Busy Time: 0 minutes 00:12:18.433 Power Cycles: 0 00:12:18.433 Power On Hours: 0 hours 00:12:18.433 Unsafe Shutdowns: 0 00:12:18.433 Unrecoverable Media Errors: 0 00:12:18.433 Lifetime Error Log Entries: 0 00:12:18.433 Warning Temperature Time: 0 minutes 00:12:18.433 Critical Temperature Time: 0 minutes 00:12:18.433 00:12:18.433 Number of Queues 00:12:18.433 ================ 00:12:18.433 Number of I/O Submission Queues: 127 00:12:18.433 Number of I/O Completion Queues: 127 00:12:18.433 00:12:18.433 Active Namespaces 00:12:18.433 ================= 00:12:18.433 Namespace ID:1 00:12:18.433 Error Recovery Timeout: Unlimited 00:12:18.433 Command Set Identifier: NVM (00h) 00:12:18.433 Deallocate: Supported 00:12:18.433 Deallocated/Unwritten Error: Not Supported 00:12:18.433 Deallocated Read Value: Unknown 00:12:18.433 Deallocate in Write Zeroes: Not Supported 00:12:18.433 Deallocated Guard Field: 0xFFFF 00:12:18.433 Flush: Supported 00:12:18.433 Reservation: Supported 00:12:18.433 Namespace Sharing Capabilities: Multiple Controllers 00:12:18.433 Size (in LBAs): 131072 (0GiB) 00:12:18.433 Capacity (in LBAs): 131072 (0GiB) 00:12:18.433 Utilization (in LBAs): 131072 (0GiB) 00:12:18.433 NGUID: 65997B459C834B08858A42EBB593276D 00:12:18.433 UUID: 65997b45-9c83-4b08-858a-42ebb593276d 00:12:18.433 Thin Provisioning: Not Supported 00:12:18.433 Per-NS Atomic Units: Yes 00:12:18.433 Atomic Boundary Size (Normal): 0 00:12:18.433 Atomic Boundary Size (PFail): 0 00:12:18.433 Atomic Boundary Offset: 0 00:12:18.433 Maximum Single Source Range Length: 65535 00:12:18.433 Maximum Copy Length: 65535 00:12:18.433 Maximum Source Range Count: 1 00:12:18.433 NGUID/EUI64 Never Reused: No 00:12:18.433 Namespace Write Protected: No 00:12:18.433 Number of LBA Formats: 1 00:12:18.433 Current LBA Format: LBA Format 
#00 00:12:18.433 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:18.433 00:12:18.433 12:42:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:18.433 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.433 [2024-07-15 12:42:10.320958] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:23.707 Initializing NVMe Controllers 00:12:23.707 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:23.707 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:23.707 Initialization complete. Launching workers. 00:12:23.707 ======================================================== 00:12:23.707 Latency(us) 00:12:23.707 Device Information : IOPS MiB/s Average min max 00:12:23.707 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 18592.66 72.63 6885.14 2667.31 14562.54 00:12:23.707 ======================================================== 00:12:23.707 Total : 18592.66 72.63 6885.14 2667.31 14562.54 00:12:23.707 00:12:23.707 [2024-07-15 12:42:15.424571] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:23.707 12:42:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:23.707 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.967 [2024-07-15 12:42:15.710724] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:29.240 
Initializing NVMe Controllers 00:12:29.240 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:29.240 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:29.240 Initialization complete. Launching workers. 00:12:29.240 ======================================================== 00:12:29.240 Latency(us) 00:12:29.240 Device Information : IOPS MiB/s Average min max 00:12:29.240 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24089.60 94.10 5314.03 1552.28 8421.72 00:12:29.240 ======================================================== 00:12:29.240 Total : 24089.60 94.10 5314.03 1552.28 8421.72 00:12:29.240 00:12:29.240 [2024-07-15 12:42:20.734195] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:29.240 12:42:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:29.240 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.240 [2024-07-15 12:42:21.018343] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:34.509 [2024-07-15 12:42:26.160397] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:34.509 Initializing NVMe Controllers 00:12:34.509 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:34.509 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:34.510 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:34.510 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 
00:12:34.510 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:34.510 Initialization complete. Launching workers. 00:12:34.510 Starting thread on core 2 00:12:34.510 Starting thread on core 3 00:12:34.510 Starting thread on core 1 00:12:34.510 12:42:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:34.510 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.767 [2024-07-15 12:42:26.504735] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:38.062 [2024-07-15 12:42:29.632761] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:38.062 Initializing NVMe Controllers 00:12:38.062 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:38.062 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:38.062 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:38.062 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:38.062 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:38.062 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:38.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:38.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:38.062 Initialization complete. Launching workers. 
00:12:38.062 Starting thread on core 1 with urgent priority queue 00:12:38.062 Starting thread on core 2 with urgent priority queue 00:12:38.062 Starting thread on core 3 with urgent priority queue 00:12:38.062 Starting thread on core 0 with urgent priority queue 00:12:38.062 SPDK bdev Controller (SPDK2 ) core 0: 2431.33 IO/s 41.13 secs/100000 ios 00:12:38.062 SPDK bdev Controller (SPDK2 ) core 1: 1507.00 IO/s 66.36 secs/100000 ios 00:12:38.062 SPDK bdev Controller (SPDK2 ) core 2: 1787.67 IO/s 55.94 secs/100000 ios 00:12:38.062 SPDK bdev Controller (SPDK2 ) core 3: 2482.00 IO/s 40.29 secs/100000 ios 00:12:38.062 ======================================================== 00:12:38.062 00:12:38.062 12:42:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:38.062 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.062 [2024-07-15 12:42:29.951034] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:38.062 Initializing NVMe Controllers 00:12:38.062 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:38.062 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:38.062 Namespace ID: 1 size: 0GB 00:12:38.062 Initialization complete. 00:12:38.062 INFO: using host memory buffer for IO 00:12:38.062 Hello world! 
00:12:38.062 [2024-07-15 12:42:29.960322] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:38.321 12:42:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:38.321 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.579 [2024-07-15 12:42:30.276096] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:39.513 Initializing NVMe Controllers 00:12:39.513 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:39.513 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:39.513 Initialization complete. Launching workers. 00:12:39.513 submit (in ns) avg, min, max = 9527.9, 4561.8, 4004473.6 00:12:39.513 complete (in ns) avg, min, max = 35963.6, 2712.7, 4059115.5 00:12:39.513 00:12:39.513 Submit histogram 00:12:39.513 ================ 00:12:39.513 Range in us Cumulative Count 00:12:39.513 4.538 - 4.567: 0.0319% ( 3) 00:12:39.513 4.567 - 4.596: 0.5635% ( 50) 00:12:39.513 4.596 - 4.625: 1.5417% ( 92) 00:12:39.513 4.625 - 4.655: 4.1042% ( 241) 00:12:39.513 4.655 - 4.684: 8.8464% ( 446) 00:12:39.513 4.684 - 4.713: 15.7895% ( 653) 00:12:39.513 4.713 - 4.742: 25.7735% ( 939) 00:12:39.513 4.742 - 4.771: 39.5534% ( 1296) 00:12:39.513 4.771 - 4.800: 50.2499% ( 1006) 00:12:39.513 4.800 - 4.829: 60.6273% ( 976) 00:12:39.513 4.829 - 4.858: 70.4519% ( 924) 00:12:39.513 4.858 - 4.887: 78.1818% ( 727) 00:12:39.513 4.887 - 4.916: 83.5513% ( 505) 00:12:39.513 4.916 - 4.945: 85.9862% ( 229) 00:12:39.513 4.945 - 4.975: 87.4216% ( 135) 00:12:39.513 4.975 - 5.004: 88.8357% ( 133) 00:12:39.513 5.004 - 5.033: 90.7496% ( 180) 00:12:39.513 5.033 - 5.062: 92.6741% ( 181) 00:12:39.513 5.062 - 5.091: 94.6624% ( 187) 00:12:39.513 5.091 - 5.120: 96.2573% ( 150) 
00:12:39.513 5.120 - 5.149: 97.6396% ( 130) 00:12:39.513 5.149 - 5.178: 98.4583% ( 77) 00:12:39.513 5.178 - 5.207: 98.9686% ( 48) 00:12:39.513 5.207 - 5.236: 99.2238% ( 24) 00:12:39.514 5.236 - 5.265: 99.3514% ( 12) 00:12:39.514 5.265 - 5.295: 99.3939% ( 4) 00:12:39.514 5.295 - 5.324: 99.4471% ( 5) 00:12:39.514 5.353 - 5.382: 99.4790% ( 3) 00:12:39.514 5.411 - 5.440: 99.4896% ( 1) 00:12:39.514 5.469 - 5.498: 99.5003% ( 1) 00:12:39.514 5.818 - 5.847: 99.5109% ( 1) 00:12:39.514 6.865 - 6.895: 99.5215% ( 1) 00:12:39.514 7.447 - 7.505: 99.5322% ( 1) 00:12:39.514 7.564 - 7.622: 99.5428% ( 1) 00:12:39.514 7.622 - 7.680: 99.5534% ( 1) 00:12:39.514 7.796 - 7.855: 99.5641% ( 1) 00:12:39.514 7.913 - 7.971: 99.5853% ( 2) 00:12:39.514 8.029 - 8.087: 99.6066% ( 2) 00:12:39.514 8.087 - 8.145: 99.6172% ( 1) 00:12:39.514 8.262 - 8.320: 99.6279% ( 1) 00:12:39.514 8.320 - 8.378: 99.6385% ( 1) 00:12:39.514 8.495 - 8.553: 99.6491% ( 1) 00:12:39.514 8.669 - 8.727: 99.6704% ( 2) 00:12:39.514 8.727 - 8.785: 99.6917% ( 2) 00:12:39.514 8.844 - 8.902: 99.7023% ( 1) 00:12:39.514 8.902 - 8.960: 99.7129% ( 1) 00:12:39.514 8.960 - 9.018: 99.7236% ( 1) 00:12:39.514 9.018 - 9.076: 99.7342% ( 1) 00:12:39.514 9.193 - 9.251: 99.7661% ( 3) 00:12:39.514 9.251 - 9.309: 99.7873% ( 2) 00:12:39.514 9.309 - 9.367: 99.8192% ( 3) 00:12:39.514 9.425 - 9.484: 99.8299% ( 1) 00:12:39.514 9.484 - 9.542: 99.8405% ( 1) 00:12:39.514 9.600 - 9.658: 99.8511% ( 1) 00:12:39.514 9.716 - 9.775: 99.8618% ( 1) 00:12:39.514 9.775 - 9.833: 99.8724% ( 1) 00:12:39.514 181.527 - 182.458: 99.8830% ( 1) 00:12:39.514 3991.738 - 4021.527: 100.0000% ( 11) 00:12:39.514 00:12:39.514 Complete histogram 00:12:39.514 ================== 00:12:39.514 Range in us Cumulative Count 00:12:39.514 2.705 - 2.720: 0.0744% ( 7) 00:12:39.514 2.720 - 2.735: 0.7018% ( 59) 00:12:39.514 2.735 - 2.749: 2.3073% ( 151) 00:12:39.514 2.749 - 2.764: 4.0829% ( 167) 00:12:39.514 2.764 - 2.778: 8.9527% ( 458) 00:12:39.514 2.778 - 2.793: 38.3094% ( 2761) 
00:12:39.514 2.793 - 2.807: 75.6087% ( 3508) 00:12:39.514 2.807 - 2.822: 87.6449% ( 1132) 00:12:39.514 2.822 - 2.836: 91.0048% ( 316) 00:12:39.514 2.836 - 2.851: 93.1632% ( 203) 00:12:39.514 2.851 - 2.865: 94.2265% ( 100) 00:12:39.514 2.865 - 2.880: 95.5768% ( 127) 00:12:39.514 2.880 - 2.895: 97.1186% ( 145) 00:12:39.514 2.895 - 2.909: 98.1499% ( 97) 00:12:39.514 2.909 - 2.924: 98.4689% ( 30) 00:12:39.514 2.924 - 2.938: 98.6178% ( 14) 00:12:39.514 2.938 - 2.953: 98.6816% ( 6) 00:12:39.514 2.953 - 2.967: 98.7453% ( 6) 00:12:39.514 2.967 - 2.982: 98.8091% ( 6) 00:12:39.514 2.982 - [2024-07-15 12:42:31.378706] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:39.514 2.996: 98.8198% ( 1) 00:12:39.514 2.996 - 3.011: 98.8304% ( 1) 00:12:39.514 3.011 - 3.025: 98.8410% ( 1) 00:12:39.514 3.025 - 3.040: 98.8517% ( 1) 00:12:39.514 3.040 - 3.055: 98.9048% ( 5) 00:12:39.514 3.055 - 3.069: 98.9155% ( 1) 00:12:39.514 3.069 - 3.084: 98.9367% ( 2) 00:12:39.514 3.084 - 3.098: 98.9474% ( 1) 00:12:39.514 5.149 - 5.178: 98.9580% ( 1) 00:12:39.514 5.731 - 5.760: 98.9686% ( 1) 00:12:39.514 5.935 - 5.964: 98.9899% ( 2) 00:12:39.514 5.964 - 5.993: 99.0005% ( 1) 00:12:39.514 6.080 - 6.109: 99.0112% ( 1) 00:12:39.514 6.225 - 6.255: 99.0218% ( 1) 00:12:39.514 6.255 - 6.284: 99.0324% ( 1) 00:12:39.514 6.720 - 6.749: 99.0431% ( 1) 00:12:39.514 6.865 - 6.895: 99.0537% ( 1) 00:12:39.514 7.040 - 7.069: 99.0643% ( 1) 00:12:39.514 7.098 - 7.127: 99.0750% ( 1) 00:12:39.514 7.127 - 7.156: 99.0856% ( 1) 00:12:39.514 7.156 - 7.185: 99.0962% ( 1) 00:12:39.514 7.185 - 7.215: 99.1069% ( 1) 00:12:39.514 7.564 - 7.622: 99.1175% ( 1) 00:12:39.514 7.622 - 7.680: 99.1281% ( 1) 00:12:39.514 7.913 - 7.971: 99.1388% ( 1) 00:12:39.514 8.204 - 8.262: 99.1494% ( 1) 00:12:39.514 16.989 - 17.105: 99.1600% ( 1) 00:12:39.514 53.993 - 54.225: 99.1707% ( 1) 00:12:39.514 3991.738 - 4021.527: 99.9894% ( 77) 00:12:39.514 4051.316 - 4081.105: 100.0000% ( 1) 
00:12:39.514 00:12:39.514 12:42:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:39.514 12:42:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:39.514 12:42:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:39.514 12:42:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:39.514 12:42:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:39.773 [ 00:12:39.773 { 00:12:39.773 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:39.773 "subtype": "Discovery", 00:12:39.773 "listen_addresses": [], 00:12:39.773 "allow_any_host": true, 00:12:39.773 "hosts": [] 00:12:39.773 }, 00:12:39.773 { 00:12:39.773 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:39.773 "subtype": "NVMe", 00:12:39.773 "listen_addresses": [ 00:12:39.773 { 00:12:39.773 "trtype": "VFIOUSER", 00:12:39.773 "adrfam": "IPv4", 00:12:39.773 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:39.773 "trsvcid": "0" 00:12:39.773 } 00:12:39.773 ], 00:12:39.773 "allow_any_host": true, 00:12:39.774 "hosts": [], 00:12:39.774 "serial_number": "SPDK1", 00:12:39.774 "model_number": "SPDK bdev Controller", 00:12:39.774 "max_namespaces": 32, 00:12:39.774 "min_cntlid": 1, 00:12:39.774 "max_cntlid": 65519, 00:12:39.774 "namespaces": [ 00:12:39.774 { 00:12:39.774 "nsid": 1, 00:12:39.774 "bdev_name": "Malloc1", 00:12:39.774 "name": "Malloc1", 00:12:39.774 "nguid": "75040B5B58D149BFB4E889E02C690881", 00:12:39.774 "uuid": "75040b5b-58d1-49bf-b4e8-89e02c690881" 00:12:39.774 }, 00:12:39.774 { 00:12:39.774 "nsid": 2, 00:12:39.774 "bdev_name": "Malloc3", 00:12:39.774 "name": "Malloc3", 00:12:39.774 "nguid": "06D06762A18A498EBC804E60B4AF889F", 00:12:39.774 "uuid": 
"06d06762-a18a-498e-bc80-4e60b4af889f" 00:12:39.774 } 00:12:39.774 ] 00:12:39.774 }, 00:12:39.774 { 00:12:39.774 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:39.774 "subtype": "NVMe", 00:12:39.774 "listen_addresses": [ 00:12:39.774 { 00:12:39.774 "trtype": "VFIOUSER", 00:12:39.774 "adrfam": "IPv4", 00:12:39.774 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:39.774 "trsvcid": "0" 00:12:39.774 } 00:12:39.774 ], 00:12:39.774 "allow_any_host": true, 00:12:39.774 "hosts": [], 00:12:39.774 "serial_number": "SPDK2", 00:12:39.774 "model_number": "SPDK bdev Controller", 00:12:39.774 "max_namespaces": 32, 00:12:39.774 "min_cntlid": 1, 00:12:39.774 "max_cntlid": 65519, 00:12:39.774 "namespaces": [ 00:12:39.774 { 00:12:39.774 "nsid": 1, 00:12:39.774 "bdev_name": "Malloc2", 00:12:39.774 "name": "Malloc2", 00:12:39.774 "nguid": "65997B459C834B08858A42EBB593276D", 00:12:39.774 "uuid": "65997b45-9c83-4b08-858a-42ebb593276d" 00:12:39.774 } 00:12:39.774 ] 00:12:39.774 } 00:12:39.774 ] 00:12:39.774 12:42:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:39.774 12:42:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:39.774 12:42:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3854533 00:12:39.774 12:42:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:39.774 12:42:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:39.774 12:42:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:39.774 12:42:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:39.774 12:42:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:39.774 12:42:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:39.774 12:42:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:40.032 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.032 [2024-07-15 12:42:31.873158] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:40.032 Malloc4 00:12:40.291 12:42:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:40.291 [2024-07-15 12:42:32.205119] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:40.291 12:42:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:40.550 Asynchronous Event Request test 00:12:40.550 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:40.550 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:40.550 Registering asynchronous event callbacks... 00:12:40.550 Starting namespace attribute notice tests for all controllers... 00:12:40.550 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:40.550 aer_cb - Changed Namespace 00:12:40.550 Cleaning up... 
00:12:40.550 [ 00:12:40.550 { 00:12:40.550 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:40.550 "subtype": "Discovery", 00:12:40.550 "listen_addresses": [], 00:12:40.550 "allow_any_host": true, 00:12:40.550 "hosts": [] 00:12:40.550 }, 00:12:40.550 { 00:12:40.550 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:40.550 "subtype": "NVMe", 00:12:40.550 "listen_addresses": [ 00:12:40.550 { 00:12:40.550 "trtype": "VFIOUSER", 00:12:40.550 "adrfam": "IPv4", 00:12:40.550 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:40.550 "trsvcid": "0" 00:12:40.550 } 00:12:40.550 ], 00:12:40.550 "allow_any_host": true, 00:12:40.550 "hosts": [], 00:12:40.550 "serial_number": "SPDK1", 00:12:40.550 "model_number": "SPDK bdev Controller", 00:12:40.550 "max_namespaces": 32, 00:12:40.550 "min_cntlid": 1, 00:12:40.550 "max_cntlid": 65519, 00:12:40.550 "namespaces": [ 00:12:40.550 { 00:12:40.550 "nsid": 1, 00:12:40.550 "bdev_name": "Malloc1", 00:12:40.550 "name": "Malloc1", 00:12:40.550 "nguid": "75040B5B58D149BFB4E889E02C690881", 00:12:40.550 "uuid": "75040b5b-58d1-49bf-b4e8-89e02c690881" 00:12:40.550 }, 00:12:40.550 { 00:12:40.550 "nsid": 2, 00:12:40.550 "bdev_name": "Malloc3", 00:12:40.550 "name": "Malloc3", 00:12:40.550 "nguid": "06D06762A18A498EBC804E60B4AF889F", 00:12:40.550 "uuid": "06d06762-a18a-498e-bc80-4e60b4af889f" 00:12:40.550 } 00:12:40.550 ] 00:12:40.550 }, 00:12:40.550 { 00:12:40.550 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:40.550 "subtype": "NVMe", 00:12:40.550 "listen_addresses": [ 00:12:40.550 { 00:12:40.550 "trtype": "VFIOUSER", 00:12:40.550 "adrfam": "IPv4", 00:12:40.550 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:40.550 "trsvcid": "0" 00:12:40.550 } 00:12:40.550 ], 00:12:40.550 "allow_any_host": true, 00:12:40.550 "hosts": [], 00:12:40.550 "serial_number": "SPDK2", 00:12:40.550 "model_number": "SPDK bdev Controller", 00:12:40.550 "max_namespaces": 32, 00:12:40.550 "min_cntlid": 1, 00:12:40.550 "max_cntlid": 65519, 00:12:40.550 "namespaces": [ 
00:12:40.550 { 00:12:40.550 "nsid": 1, 00:12:40.550 "bdev_name": "Malloc2", 00:12:40.550 "name": "Malloc2", 00:12:40.550 "nguid": "65997B459C834B08858A42EBB593276D", 00:12:40.550 "uuid": "65997b45-9c83-4b08-858a-42ebb593276d" 00:12:40.550 }, 00:12:40.550 { 00:12:40.550 "nsid": 2, 00:12:40.550 "bdev_name": "Malloc4", 00:12:40.550 "name": "Malloc4", 00:12:40.550 "nguid": "572C30CDE47A456FBCF1A8635D6E57A6", 00:12:40.550 "uuid": "572c30cd-e47a-456f-bcf1-a8635d6e57a6" 00:12:40.550 } 00:12:40.550 ] 00:12:40.550 } 00:12:40.550 ] 00:12:40.824 12:42:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3854533 00:12:40.824 12:42:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:40.824 12:42:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3845346 00:12:40.824 12:42:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3845346 ']' 00:12:40.824 12:42:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3845346 00:12:40.824 12:42:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:40.824 12:42:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:40.824 12:42:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3845346 00:12:40.824 12:42:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:40.824 12:42:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:40.824 12:42:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3845346' 00:12:40.824 killing process with pid 3845346 00:12:40.824 12:42:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3845346 00:12:40.824 12:42:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3845346 00:12:41.116 12:42:32 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:41.116 12:42:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:41.116 12:42:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:41.116 12:42:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:41.116 12:42:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:41.116 12:42:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3854801 00:12:41.116 12:42:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:41.116 12:42:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3854801' 00:12:41.116 Process pid: 3854801 00:12:41.116 12:42:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:41.116 12:42:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3854801 00:12:41.116 12:42:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3854801 ']' 00:12:41.116 12:42:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.116 12:42:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:41.116 12:42:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:41.116 12:42:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.116 12:42:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:41.116 [2024-07-15 12:42:32.895665] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:41.116 [2024-07-15 12:42:32.896939] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:12:41.116 [2024-07-15 12:42:32.896988] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.116 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.116 [2024-07-15 12:42:32.979083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:41.374 [2024-07-15 12:42:33.067792] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.374 [2024-07-15 12:42:33.067839] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:41.374 [2024-07-15 12:42:33.067849] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.374 [2024-07-15 12:42:33.067857] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.374 [2024-07-15 12:42:33.067864] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:41.374 [2024-07-15 12:42:33.067937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.375 [2024-07-15 12:42:33.068072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:41.375 [2024-07-15 12:42:33.068182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.375 [2024-07-15 12:42:33.068182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:41.375 [2024-07-15 12:42:33.149564] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:41.375 [2024-07-15 12:42:33.149982] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:41.375 [2024-07-15 12:42:33.150199] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:41.375 [2024-07-15 12:42:33.150223] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:41.375 [2024-07-15 12:42:33.150463] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:12:41.375 12:42:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:41.375 12:42:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:41.375 12:42:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:42.311 12:42:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:42.571 12:42:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:42.571 12:42:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:42.571 12:42:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:42.571 12:42:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:42.571 12:42:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:42.830 Malloc1 00:12:42.830 12:42:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:43.089 12:42:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:43.348 12:42:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:43.606 12:42:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:43.606 12:42:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:12:43.606 12:42:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:43.865 Malloc2 00:12:43.865 12:42:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:44.124 12:42:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:44.384 12:42:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:44.645 12:42:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:44.645 12:42:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3854801 00:12:44.645 12:42:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3854801 ']' 00:12:44.645 12:42:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3854801 00:12:44.645 12:42:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:44.645 12:42:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:44.645 12:42:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3854801 00:12:44.903 12:42:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:44.903 12:42:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:44.903 12:42:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3854801' 00:12:44.903 killing 
process with pid 3854801 00:12:44.903 12:42:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3854801 00:12:44.903 12:42:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3854801 00:12:45.163 12:42:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:45.163 12:42:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:45.163 00:12:45.163 real 0m53.137s 00:12:45.163 user 3m30.059s 00:12:45.163 sys 0m4.066s 00:12:45.163 12:42:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:45.163 12:42:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:45.163 ************************************ 00:12:45.163 END TEST nvmf_vfio_user 00:12:45.163 ************************************ 00:12:45.163 12:42:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:45.163 12:42:36 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:45.163 12:42:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:45.163 12:42:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:45.163 12:42:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:45.163 ************************************ 00:12:45.163 START TEST nvmf_vfio_user_nvme_compliance 00:12:45.163 ************************************ 00:12:45.163 12:42:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:45.163 * Looking for test storage... 
00:12:45.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.163 12:42:37 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:45.163 12:42:37 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3855659 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3855659' 00:12:45.163 Process pid: 3855659 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:45.163 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # 
waitforlisten 3855659 00:12:45.164 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:45.164 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 3855659 ']' 00:12:45.164 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.164 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:45.164 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.164 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:45.164 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:45.164 [2024-07-15 12:42:37.100884] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:12:45.164 [2024-07-15 12:42:37.100946] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.423 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.423 [2024-07-15 12:42:37.180511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:45.423 [2024-07-15 12:42:37.273734] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.423 [2024-07-15 12:42:37.273776] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:45.423 [2024-07-15 12:42:37.273787] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.423 [2024-07-15 12:42:37.273795] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.423 [2024-07-15 12:42:37.273802] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.423 [2024-07-15 12:42:37.273855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.423 [2024-07-15 12:42:37.273966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.423 [2024-07-15 12:42:37.273967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.681 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:45.681 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:12:45.681 12:42:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:46.619 malloc0 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.619 12:42:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:46.619 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.878 00:12:46.878 00:12:46.878 CUnit - A unit testing framework for C - Version 2.1-3 00:12:46.878 http://cunit.sourceforge.net/ 00:12:46.878 00:12:46.878 00:12:46.878 Suite: nvme_compliance 00:12:46.878 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 12:42:38.627061] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:46.878 [2024-07-15 12:42:38.628580] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:46.878 [2024-07-15 12:42:38.628604] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:46.878 [2024-07-15 12:42:38.628615] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:46.878 [2024-07-15 12:42:38.630098] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:46.878 passed 00:12:46.878 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 12:42:38.732243] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:46.878 [2024-07-15 12:42:38.735299] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:46.878 passed 00:12:47.136 Test: admin_identify_ns ...[2024-07-15 12:42:38.838110] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:47.136 [2024-07-15 12:42:38.902268] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:47.136 [2024-07-15 12:42:38.910274] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:47.136 [2024-07-15 12:42:38.934439] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling 
controller 00:12:47.136 passed 00:12:47.136 Test: admin_get_features_mandatory_features ...[2024-07-15 12:42:39.029399] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:47.136 [2024-07-15 12:42:39.032424] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:47.395 passed 00:12:47.395 Test: admin_get_features_optional_features ...[2024-07-15 12:42:39.132431] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:47.395 [2024-07-15 12:42:39.136470] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:47.395 passed 00:12:47.395 Test: admin_set_features_number_of_queues ...[2024-07-15 12:42:39.234630] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:47.654 [2024-07-15 12:42:39.339359] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:47.654 passed 00:12:47.654 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 12:42:39.438687] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:47.654 [2024-07-15 12:42:39.441733] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:47.654 passed 00:12:47.654 Test: admin_get_log_page_with_lpo ...[2024-07-15 12:42:39.540913] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:47.913 [2024-07-15 12:42:39.609271] ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:47.913 [2024-07-15 12:42:39.622334] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:47.913 passed 00:12:47.913 Test: fabric_property_get ...[2024-07-15 12:42:39.718242] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:47.913 [2024-07-15 12:42:39.719620] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 
0x7f failed 00:12:47.913 [2024-07-15 12:42:39.721274] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:47.913 passed 00:12:47.913 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 12:42:39.820279] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:47.913 [2024-07-15 12:42:39.821776] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:47.913 [2024-07-15 12:42:39.824336] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:48.171 passed 00:12:48.171 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 12:42:39.922450] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:48.171 [2024-07-15 12:42:40.008269] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:48.171 [2024-07-15 12:42:40.024273] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:48.171 [2024-07-15 12:42:40.029379] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:48.171 passed 00:12:48.429 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 12:42:40.125442] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:48.429 [2024-07-15 12:42:40.126931] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:48.429 [2024-07-15 12:42:40.129500] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:48.429 passed 00:12:48.429 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 12:42:40.227590] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:48.429 [2024-07-15 12:42:40.304279] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:48.430 [2024-07-15 12:42:40.328261] 
vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:48.430 [2024-07-15 12:42:40.333391] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:48.688 passed 00:12:48.688 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 12:42:40.430703] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:48.688 [2024-07-15 12:42:40.432184] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:48.688 [2024-07-15 12:42:40.432233] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:48.688 [2024-07-15 12:42:40.433740] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:48.688 passed 00:12:48.688 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 12:42:40.532051] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:48.688 [2024-07-15 12:42:40.623270] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:48.945 [2024-07-15 12:42:40.631262] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:48.945 [2024-07-15 12:42:40.639262] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:48.946 [2024-07-15 12:42:40.647261] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:48.946 [2024-07-15 12:42:40.676367] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:48.946 passed 00:12:48.946 Test: admin_create_io_sq_verify_pc ...[2024-07-15 12:42:40.776667] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:48.946 [2024-07-15 12:42:40.791285] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:48.946 [2024-07-15 12:42:40.811926] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:12:48.946 passed 00:12:49.203 Test: admin_create_io_qp_max_qps ...[2024-07-15 12:42:40.909938] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:50.142 [2024-07-15 12:42:42.013266] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:50.710 [2024-07-15 12:42:42.394451] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:50.710 passed 00:12:50.710 Test: admin_create_io_sq_shared_cq ...[2024-07-15 12:42:42.492733] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:50.710 [2024-07-15 12:42:42.624263] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:50.968 [2024-07-15 12:42:42.661352] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:50.968 passed 00:12:50.968 00:12:50.968 Run Summary: Type Total Ran Passed Failed Inactive 00:12:50.968 suites 1 1 n/a 0 0 00:12:50.968 tests 18 18 18 0 0 00:12:50.968 asserts 360 360 360 0 n/a 00:12:50.968 00:12:50.968 Elapsed time = 1.704 seconds 00:12:50.968 12:42:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3855659 00:12:50.968 12:42:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 3855659 ']' 00:12:50.968 12:42:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 3855659 00:12:50.968 12:42:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:12:50.968 12:42:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:50.968 12:42:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3855659 00:12:50.968 12:42:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:50.968 12:42:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:50.968 12:42:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3855659' 00:12:50.968 killing process with pid 3855659 00:12:50.968 12:42:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 3855659 00:12:50.968 12:42:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 3855659 00:12:51.227 12:42:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:51.227 12:42:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:51.227 00:12:51.227 real 0m6.059s 00:12:51.227 user 0m17.105s 00:12:51.227 sys 0m0.469s 00:12:51.227 12:42:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:51.227 12:42:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:51.227 ************************************ 00:12:51.227 END TEST nvmf_vfio_user_nvme_compliance 00:12:51.227 ************************************ 00:12:51.227 12:42:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:51.227 12:42:43 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:51.227 12:42:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:51.227 12:42:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:51.227 12:42:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:51.227 ************************************ 00:12:51.227 START TEST nvmf_vfio_user_fuzz 00:12:51.227 ************************************ 00:12:51.227 12:42:43 
nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:51.227 * Looking for test storage... 00:12:51.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:51.227 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.227 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:51.227 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.227 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.227 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.227 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.227 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.227 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.227 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.227 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.227 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.227 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.227 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.485 
12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3856764 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3856764' 00:12:51.485 Process pid: 3856764 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3856764 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3856764 ']' 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:51.485 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:51.744 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:51.744 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:12:51.744 12:42:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:52.681 malloc0 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:52.681 12:42:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:24.763 Fuzzing completed. 
Shutting down the fuzz application 00:13:24.763 00:13:24.763 Dumping successful admin opcodes: 00:13:24.763 8, 9, 10, 24, 00:13:24.763 Dumping successful io opcodes: 00:13:24.763 0, 00:13:24.763 NS: 0x200003a1ef00 I/O qp, Total commands completed: 560843, total successful commands: 2156, random_seed: 3767637760 00:13:24.763 NS: 0x200003a1ef00 admin qp, Total commands completed: 97943, total successful commands: 798, random_seed: 3150503744 00:13:24.763 12:43:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:24.763 12:43:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.763 12:43:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:24.763 12:43:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.763 12:43:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3856764 00:13:24.763 12:43:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3856764 ']' 00:13:24.763 12:43:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 3856764 00:13:24.763 12:43:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:13:24.763 12:43:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:24.763 12:43:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3856764 00:13:24.763 12:43:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:24.763 12:43:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:24.763 12:43:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3856764' 00:13:24.763 killing process with pid 3856764 00:13:24.763 12:43:15 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@967 -- # kill 3856764 00:13:24.763 12:43:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 3856764 00:13:24.763 12:43:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:24.763 12:43:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:24.763 00:13:24.763 real 0m32.335s 00:13:24.763 user 0m32.788s 00:13:24.763 sys 0m19.085s 00:13:24.763 12:43:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:24.763 12:43:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:24.763 ************************************ 00:13:24.763 END TEST nvmf_vfio_user_fuzz 00:13:24.763 ************************************ 00:13:24.763 12:43:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:24.763 12:43:15 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:24.763 12:43:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:24.763 12:43:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:24.763 12:43:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:24.763 ************************************ 00:13:24.763 START TEST nvmf_host_management 00:13:24.763 ************************************ 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:24.763 * Looking for test storage... 
00:13:24.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.763 
12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.763 12:43:15 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:24.764 12:43:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:30.037 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.037 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:30.037 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:13:30.037 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:30.037 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:30.037 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:30.037 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:30.037 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:30.037 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:30.037 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:30.037 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:30.037 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:30.037 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:30.037 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:30.038 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.038 
12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:30.038 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:30.038 Found net devices under 0000:af:00.0: cvl_0_0 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:30.038 Found net devices under 0000:af:00.1: cvl_0_1 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:30.038 12:43:20 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.038 12:43:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.038 12:43:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.038 12:43:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.038 12:43:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:30.038 12:43:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.038 12:43:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.038 12:43:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.038 12:43:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:30.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:30.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:13:30.038 00:13:30.038 --- 10.0.0.2 ping statistics --- 00:13:30.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.038 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:13:30.038 12:43:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:30.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:13:30.038 00:13:30.038 --- 10.0.0.1 ping statistics --- 00:13:30.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.038 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:13:30.038 12:43:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.038 12:43:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:30.038 12:43:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:30.038 12:43:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.038 12:43:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:30.038 12:43:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:30.038 12:43:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.038 12:43:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:30.039 12:43:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:30.039 12:43:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:30.039 12:43:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:30.039 12:43:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:30.039 12:43:21 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:30.039 12:43:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:30.039 12:43:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:30.039 12:43:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3865669 00:13:30.039 12:43:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3865669 00:13:30.039 12:43:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:30.039 12:43:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3865669 ']' 00:13:30.039 12:43:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.039 12:43:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.039 12:43:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.039 12:43:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.039 12:43:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:30.039 [2024-07-15 12:43:21.316174] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:13:30.039 [2024-07-15 12:43:21.316229] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.039 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.039 [2024-07-15 12:43:21.403764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:30.039 [2024-07-15 12:43:21.508214] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.039 [2024-07-15 12:43:21.508270] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.039 [2024-07-15 12:43:21.508284] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.039 [2024-07-15 12:43:21.508296] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.039 [2024-07-15 12:43:21.508305] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:30.039 [2024-07-15 12:43:21.508435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.039 [2024-07-15 12:43:21.508550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.039 [2024-07-15 12:43:21.508665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:30.039 [2024-07-15 12:43:21.508666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:30.607 [2024-07-15 12:43:22.306097] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:30.607 12:43:22 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:30.607 Malloc0 00:13:30.607 [2024-07-15 12:43:22.376158] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3865970 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3865970 /var/tmp/bdevperf.sock 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3865970 ']' 00:13:30.607 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:30.608 12:43:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:30.608 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.608 12:43:22 nvmf_tcp.nvmf_host_management 
-- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:30.608 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:30.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:30.608 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.608 12:43:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:30.608 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:30.608 12:43:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:30.608 12:43:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:30.608 12:43:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:30.608 { 00:13:30.608 "params": { 00:13:30.608 "name": "Nvme$subsystem", 00:13:30.608 "trtype": "$TEST_TRANSPORT", 00:13:30.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:30.608 "adrfam": "ipv4", 00:13:30.608 "trsvcid": "$NVMF_PORT", 00:13:30.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:30.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:30.608 "hdgst": ${hdgst:-false}, 00:13:30.608 "ddgst": ${ddgst:-false} 00:13:30.608 }, 00:13:30.608 "method": "bdev_nvme_attach_controller" 00:13:30.608 } 00:13:30.608 EOF 00:13:30.608 )") 00:13:30.608 12:43:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:30.608 12:43:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:13:30.608 12:43:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:30.608 12:43:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:30.608 "params": { 00:13:30.608 "name": "Nvme0", 00:13:30.608 "trtype": "tcp", 00:13:30.608 "traddr": "10.0.0.2", 00:13:30.608 "adrfam": "ipv4", 00:13:30.608 "trsvcid": "4420", 00:13:30.608 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:30.608 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:30.608 "hdgst": false, 00:13:30.608 "ddgst": false 00:13:30.608 }, 00:13:30.608 "method": "bdev_nvme_attach_controller" 00:13:30.608 }' 00:13:30.608 [2024-07-15 12:43:22.473165] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:13:30.608 [2024-07-15 12:43:22.473224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3865970 ] 00:13:30.608 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.867 [2024-07-15 12:43:22.554801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.867 [2024-07-15 12:43:22.640311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.126 Running I/O for 10 seconds... 
00:13:31.126 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:31.126 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:31.126 12:43:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:31.126 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.126 12:43:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:31.126 12:43:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.126 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:31.126 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:31.126 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:31.126 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:31.126 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:31.126 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:31.126 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:31.126 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:31.126 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:31.126 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:31.126 12:43:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.126 
12:43:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:31.126 12:43:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.126 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:13:31.126 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:13:31.126 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:13:31.385 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:13:31.385 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:31.385 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:31.385 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:31.385 12:43:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.385 12:43:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:31.645 12:43:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.645 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:13:31.645 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:13:31.645 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:31.645 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:31.645 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:31.645 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:31.645 12:43:23 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.645 12:43:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:31.645 [2024-07-15 12:43:23.360705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.360753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.360773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.360784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.360797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.360807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.360819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.360829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.360841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.360850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.360862] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.360871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.360883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.360893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.360905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.360915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.360927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.360937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.360949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.360959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.360971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.360981] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.360993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.361003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.361016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.361026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.361041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.361051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.361063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.361073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.361086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.361096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.361108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61440 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.361118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.361131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.361141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.361153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.361163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.361175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.361185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.361197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.361207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.361220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.361229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 
[2024-07-15 12:43:23.361242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.361252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.361270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.361280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.361292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.361301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.361313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.361325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.361338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.361348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.645 [2024-07-15 12:43:23.361360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.645 [2024-07-15 12:43:23.361370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 
12:43:23.361767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.361981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.361992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.362004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.362014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.362026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.362037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.362049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.362060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.362072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.362083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.362095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.362105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.362118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.362128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.362141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.362151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.362164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.362174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.362189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:31.646 [2024-07-15 12:43:23.362199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.646 [2024-07-15 12:43:23.362278] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11a8e60 was disconnected and freed. reset controller. 
00:13:31.646 [2024-07-15 12:43:23.363626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:31.646 12:43:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.646 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:31.646 task offset: 59392 on job bdev=Nvme0n1 fails 00:13:31.646 00:13:31.646 Latency(us) 00:13:31.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.646 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:31.646 Job: Nvme0n1 ended in about 0.44 seconds with error 00:13:31.646 Verification LBA range: start 0x0 length 0x400 00:13:31.646 Nvme0n1 : 0.44 1021.25 63.83 145.89 0.00 53017.95 2293.76 54335.30 00:13:31.646 =================================================================================================================== 00:13:31.646 Total : 1021.25 63.83 145.89 0.00 53017.95 2293.76 54335.30 00:13:31.646 12:43:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.646 [2024-07-15 12:43:23.365965] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:31.646 [2024-07-15 12:43:23.365986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76e90 (9): Bad file descriptor 00:13:31.646 12:43:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:31.646 [2024-07-15 12:43:23.372423] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:31.647 12:43:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.647 12:43:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:32.583 12:43:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3865970 00:13:32.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3865970) - No such process 00:13:32.583 12:43:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:32.583 12:43:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:32.583 12:43:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:32.583 12:43:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:32.583 12:43:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:32.583 12:43:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:32.583 12:43:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:32.583 12:43:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:32.583 { 00:13:32.583 "params": { 00:13:32.583 "name": "Nvme$subsystem", 00:13:32.583 "trtype": "$TEST_TRANSPORT", 00:13:32.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:32.583 "adrfam": "ipv4", 00:13:32.583 "trsvcid": "$NVMF_PORT", 00:13:32.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:32.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:32.583 "hdgst": ${hdgst:-false}, 00:13:32.583 "ddgst": ${ddgst:-false} 00:13:32.583 }, 00:13:32.583 "method": "bdev_nvme_attach_controller" 00:13:32.583 } 
00:13:32.583 EOF 00:13:32.583 )") 00:13:32.583 12:43:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:32.583 12:43:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:32.583 12:43:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:32.583 12:43:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:32.583 "params": { 00:13:32.583 "name": "Nvme0", 00:13:32.583 "trtype": "tcp", 00:13:32.583 "traddr": "10.0.0.2", 00:13:32.583 "adrfam": "ipv4", 00:13:32.583 "trsvcid": "4420", 00:13:32.583 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:32.583 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:32.583 "hdgst": false, 00:13:32.583 "ddgst": false 00:13:32.583 }, 00:13:32.583 "method": "bdev_nvme_attach_controller" 00:13:32.583 }' 00:13:32.583 [2024-07-15 12:43:24.430649] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:13:32.583 [2024-07-15 12:43:24.430712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3866255 ] 00:13:32.583 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.583 [2024-07-15 12:43:24.510988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.842 [2024-07-15 12:43:24.594564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.842 Running I/O for 1 seconds... 
00:13:34.218 00:13:34.218 Latency(us) 00:13:34.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.218 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:34.218 Verification LBA range: start 0x0 length 0x400 00:13:34.218 Nvme0n1 : 1.03 1118.86 69.93 0.00 0.00 56146.15 7626.01 52905.43 00:13:34.218 =================================================================================================================== 00:13:34.218 Total : 1118.86 69.93 0.00 0.00 56146.15 7626.01 52905.43 00:13:34.218 12:43:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:34.218 12:43:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:34.218 12:43:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:34.218 12:43:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:34.218 12:43:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:34.218 12:43:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:34.218 12:43:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:34.218 12:43:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:34.218 12:43:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:34.218 12:43:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:34.218 12:43:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:34.218 rmmod nvme_tcp 00:13:34.218 rmmod nvme_fabrics 00:13:34.218 rmmod nvme_keyring 00:13:34.218 12:43:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:34.218 
12:43:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:34.218 12:43:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:34.218 12:43:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3865669 ']' 00:13:34.218 12:43:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3865669 00:13:34.218 12:43:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 3865669 ']' 00:13:34.218 12:43:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 3865669 00:13:34.218 12:43:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:13:34.218 12:43:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:34.218 12:43:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3865669 00:13:34.218 12:43:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:34.218 12:43:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:34.218 12:43:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3865669' 00:13:34.218 killing process with pid 3865669 00:13:34.218 12:43:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 3865669 00:13:34.218 12:43:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 3865669 00:13:34.477 [2024-07-15 12:43:26.319983] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:34.477 12:43:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:34.477 12:43:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:34.477 12:43:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:34.477 12:43:26 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:34.477 12:43:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:34.477 12:43:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.477 12:43:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.477 12:43:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.014 12:43:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:37.014 12:43:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:37.014 00:13:37.014 real 0m12.954s 00:13:37.014 user 0m23.360s 00:13:37.014 sys 0m5.431s 00:13:37.014 12:43:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:37.014 12:43:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:37.014 ************************************ 00:13:37.014 END TEST nvmf_host_management 00:13:37.014 ************************************ 00:13:37.014 12:43:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:37.014 12:43:28 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:37.014 12:43:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:37.014 12:43:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.014 12:43:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:37.014 ************************************ 00:13:37.014 START TEST nvmf_lvol 00:13:37.014 ************************************ 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:37.014 * 
Looking for test storage... 00:13:37.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:37.014 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.015 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:37.015 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:37.015 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:37.015 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.015 12:43:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.015 12:43:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.015 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:37.015 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:37.015 12:43:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:37.015 12:43:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:42.351 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:42.351 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:42.351 Found net devices under 0000:af:00.0: cvl_0_0 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:42.351 Found net devices under 0000:af:00.1: cvl_0_1 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:13:42.351 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:42.610 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:42.610 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:42.610 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:42.610 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:42.610 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:42.610 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:42.610 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:42.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:42.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:13:42.610 00:13:42.610 --- 10.0.0.2 ping statistics --- 00:13:42.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.610 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:13:42.610 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:42.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:42.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:13:42.610 00:13:42.610 --- 10.0.0.1 ping statistics --- 00:13:42.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.610 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3870247 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3870247 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 3870247 ']' 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 
-- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:42.611 12:43:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:42.611 [2024-07-15 12:43:34.528817] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:13:42.611 [2024-07-15 12:43:34.528857] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.870 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.870 [2024-07-15 12:43:34.601252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:42.870 [2024-07-15 12:43:34.688627] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.870 [2024-07-15 12:43:34.688674] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.870 [2024-07-15 12:43:34.688688] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.870 [2024-07-15 12:43:34.688697] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.870 [2024-07-15 12:43:34.688704] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:42.870 [2024-07-15 12:43:34.688761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.870 [2024-07-15 12:43:34.688872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.870 [2024-07-15 12:43:34.688873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.870 12:43:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:42.870 12:43:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:13:42.870 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:42.870 12:43:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:42.870 12:43:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:43.129 12:43:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.129 12:43:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:43.129 [2024-07-15 12:43:35.065997] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.389 12:43:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:43.647 12:43:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:43.647 12:43:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:43.906 12:43:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:43.906 12:43:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:44.165 12:43:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:44.165 12:43:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4565ae7e-a5ac-48f3-b489-41cd1b2e881b 00:13:44.165 12:43:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4565ae7e-a5ac-48f3-b489-41cd1b2e881b lvol 20 00:13:44.424 12:43:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a8e7350b-67fa-4aa4-9a11-a787e03b68d2 00:13:44.424 12:43:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:44.684 12:43:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a8e7350b-67fa-4aa4-9a11-a787e03b68d2 00:13:44.943 12:43:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:45.202 [2024-07-15 12:43:37.093128] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.202 12:43:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:45.461 12:43:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3870804 00:13:45.461 12:43:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:45.461 12:43:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:45.720 EAL: No free 2048 kB hugepages reported on node 1 
00:13:46.655 12:43:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a8e7350b-67fa-4aa4-9a11-a787e03b68d2 MY_SNAPSHOT 00:13:46.912 12:43:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=65875911-04d1-4fab-b926-a4fc2b76ede2 00:13:46.912 12:43:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a8e7350b-67fa-4aa4-9a11-a787e03b68d2 30 00:13:47.170 12:43:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 65875911-04d1-4fab-b926-a4fc2b76ede2 MY_CLONE 00:13:47.429 12:43:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3f26fbd2-569c-4163-92df-1ce83f169bf4 00:13:47.429 12:43:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3f26fbd2-569c-4163-92df-1ce83f169bf4 00:13:48.364 12:43:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3870804 00:13:56.481 Initializing NVMe Controllers 00:13:56.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:56.481 Controller IO queue size 128, less than required. 00:13:56.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:56.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:56.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:56.481 Initialization complete. Launching workers. 
00:13:56.481 ======================================================== 00:13:56.481 Latency(us) 00:13:56.481 Device Information : IOPS MiB/s Average min max 00:13:56.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7230.30 28.24 17711.48 1560.97 111289.88 00:13:56.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8923.80 34.86 14341.82 4681.04 78334.50 00:13:56.481 ======================================================== 00:13:56.481 Total : 16154.10 63.10 15850.02 1560.97 111289.88 00:13:56.481 00:13:56.481 12:43:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:56.481 12:43:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a8e7350b-67fa-4aa4-9a11-a787e03b68d2 00:13:56.481 12:43:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4565ae7e-a5ac-48f3-b489-41cd1b2e881b 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:56.741 rmmod nvme_tcp 00:13:56.741 rmmod nvme_fabrics 00:13:56.741 rmmod nvme_keyring 00:13:56.741 
12:43:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3870247 ']' 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3870247 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 3870247 ']' 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 3870247 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3870247 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3870247' 00:13:56.741 killing process with pid 3870247 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 3870247 00:13:56.741 12:43:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 3870247 00:13:57.000 12:43:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:57.000 12:43:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:57.000 12:43:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:57.000 12:43:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:57.000 12:43:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:57.000 12:43:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.000 12:43:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.000 12:43:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.534 12:43:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:59.534 00:13:59.534 real 0m22.434s 00:13:59.534 user 1m5.922s 00:13:59.534 sys 0m7.162s 00:13:59.534 12:43:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:59.534 12:43:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:59.534 ************************************ 00:13:59.534 END TEST nvmf_lvol 00:13:59.534 ************************************ 00:13:59.534 12:43:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:59.534 12:43:50 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:59.534 12:43:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:59.534 12:43:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:59.534 12:43:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:59.534 ************************************ 00:13:59.534 START TEST nvmf_lvs_grow 00:13:59.534 ************************************ 00:13:59.534 12:43:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:59.534 * Looking for test storage... 
00:13:59.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.534 12:43:51 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:59.534 12:43:51 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:59.534 12:43:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:04.798 12:43:56 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.798 12:43:56 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:04.798 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:04.798 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.798 12:43:56 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:04.798 Found net devices under 0000:af:00.0: cvl_0_0 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:04.798 Found net devices under 0000:af:00.1: cvl_0_1 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:04.798 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:04.799 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:04.799 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:04.799 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:04.799 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.799 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.799 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:04.799 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:04.799 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:04.799 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:04.799 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:04.799 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:04.799 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.799 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:14:04.799 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:04.799 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:04.799 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:05.057 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:05.057 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:05.057 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:05.057 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:05.057 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:05.057 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:05.057 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:05.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:05.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:14:05.057 00:14:05.057 --- 10.0.0.2 ping statistics --- 00:14:05.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.057 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:14:05.057 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:05.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:05.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:14:05.057 00:14:05.057 --- 10.0.0.1 ping statistics --- 00:14:05.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.057 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:14:05.057 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.057 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:05.057 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:05.057 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.057 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:05.057 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:05.057 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.057 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:05.057 12:43:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:05.316 12:43:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:05.316 12:43:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:05.316 12:43:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:05.316 12:43:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:05.316 12:43:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3876367 00:14:05.316 12:43:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:05.316 12:43:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3876367 00:14:05.316 12:43:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 3876367 ']' 
00:14:05.316 12:43:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.316 12:43:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:05.316 12:43:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.316 12:43:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:05.316 12:43:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:05.316 [2024-07-15 12:43:57.062188] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:14:05.316 [2024-07-15 12:43:57.062262] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.316 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.316 [2024-07-15 12:43:57.155644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.316 [2024-07-15 12:43:57.247373] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.316 [2024-07-15 12:43:57.247413] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.316 [2024-07-15 12:43:57.247423] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.316 [2024-07-15 12:43:57.247433] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.316 [2024-07-15 12:43:57.247441] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:05.316 [2024-07-15 12:43:57.247469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.884 12:43:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.884 12:43:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:14:05.884 12:43:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:05.884 12:43:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:05.884 12:43:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:05.884 12:43:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.884 12:43:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:05.884 [2024-07-15 12:43:57.779536] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.884 12:43:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:05.884 12:43:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:05.884 12:43:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:05.884 12:43:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:06.143 ************************************ 00:14:06.143 START TEST lvs_grow_clean 00:14:06.143 ************************************ 00:14:06.143 12:43:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:14:06.143 12:43:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:06.143 12:43:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:06.143 12:43:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:06.143 12:43:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:06.143 12:43:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:06.143 12:43:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:06.143 12:43:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:06.143 12:43:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:06.143 12:43:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:06.402 12:43:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:06.402 12:43:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:06.660 12:43:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8af08d83-4cb0-42bb-ba1a-5aef1fb5af9a 00:14:06.660 12:43:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8af08d83-4cb0-42bb-ba1a-5aef1fb5af9a 00:14:06.660 12:43:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:06.660 12:43:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:06.660 12:43:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:06.660 12:43:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8af08d83-4cb0-42bb-ba1a-5aef1fb5af9a lvol 150 00:14:06.919 12:43:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c5eca91a-9c45-4c77-9501-c3653dbe77af 00:14:06.919 12:43:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:06.919 12:43:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:07.177 [2024-07-15 12:43:59.073864] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:07.177 [2024-07-15 12:43:59.073927] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:07.177 true 00:14:07.177 12:43:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:07.177 12:43:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8af08d83-4cb0-42bb-ba1a-5aef1fb5af9a 00:14:07.435 12:43:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:07.435 12:43:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:14:07.694 12:43:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c5eca91a-9c45-4c77-9501-c3653dbe77af 00:14:07.952 12:43:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:08.211 [2024-07-15 12:44:00.036843] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.211 12:44:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:08.470 12:44:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:08.470 12:44:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3876979 00:14:08.470 12:44:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:08.470 12:44:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3876979 /var/tmp/bdevperf.sock 00:14:08.470 12:44:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 3876979 ']' 00:14:08.470 12:44:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:08.470 12:44:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.470 12:44:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:08.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:08.470 12:44:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.470 12:44:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:08.470 [2024-07-15 12:44:00.336041] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:14:08.470 [2024-07-15 12:44:00.336098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3876979 ] 00:14:08.470 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.728 [2024-07-15 12:44:00.417104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.728 [2024-07-15 12:44:00.521999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.728 12:44:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.728 12:44:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:08.728 12:44:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:09.294 Nvme0n1 00:14:09.294 12:44:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:09.294 [ 00:14:09.294 { 00:14:09.294 "name": "Nvme0n1", 00:14:09.294 "aliases": [ 00:14:09.294 "c5eca91a-9c45-4c77-9501-c3653dbe77af" 
00:14:09.294 ], 00:14:09.294 "product_name": "NVMe disk", 00:14:09.294 "block_size": 4096, 00:14:09.294 "num_blocks": 38912, 00:14:09.294 "uuid": "c5eca91a-9c45-4c77-9501-c3653dbe77af", 00:14:09.294 "assigned_rate_limits": { 00:14:09.294 "rw_ios_per_sec": 0, 00:14:09.294 "rw_mbytes_per_sec": 0, 00:14:09.294 "r_mbytes_per_sec": 0, 00:14:09.294 "w_mbytes_per_sec": 0 00:14:09.294 }, 00:14:09.294 "claimed": false, 00:14:09.294 "zoned": false, 00:14:09.294 "supported_io_types": { 00:14:09.294 "read": true, 00:14:09.294 "write": true, 00:14:09.294 "unmap": true, 00:14:09.294 "flush": true, 00:14:09.294 "reset": true, 00:14:09.294 "nvme_admin": true, 00:14:09.294 "nvme_io": true, 00:14:09.294 "nvme_io_md": false, 00:14:09.294 "write_zeroes": true, 00:14:09.294 "zcopy": false, 00:14:09.294 "get_zone_info": false, 00:14:09.294 "zone_management": false, 00:14:09.294 "zone_append": false, 00:14:09.294 "compare": true, 00:14:09.294 "compare_and_write": true, 00:14:09.294 "abort": true, 00:14:09.294 "seek_hole": false, 00:14:09.294 "seek_data": false, 00:14:09.294 "copy": true, 00:14:09.294 "nvme_iov_md": false 00:14:09.294 }, 00:14:09.294 "memory_domains": [ 00:14:09.294 { 00:14:09.294 "dma_device_id": "system", 00:14:09.294 "dma_device_type": 1 00:14:09.294 } 00:14:09.294 ], 00:14:09.294 "driver_specific": { 00:14:09.294 "nvme": [ 00:14:09.294 { 00:14:09.294 "trid": { 00:14:09.294 "trtype": "TCP", 00:14:09.294 "adrfam": "IPv4", 00:14:09.294 "traddr": "10.0.0.2", 00:14:09.294 "trsvcid": "4420", 00:14:09.294 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:09.294 }, 00:14:09.294 "ctrlr_data": { 00:14:09.294 "cntlid": 1, 00:14:09.294 "vendor_id": "0x8086", 00:14:09.294 "model_number": "SPDK bdev Controller", 00:14:09.294 "serial_number": "SPDK0", 00:14:09.294 "firmware_revision": "24.09", 00:14:09.294 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:09.294 "oacs": { 00:14:09.294 "security": 0, 00:14:09.294 "format": 0, 00:14:09.294 "firmware": 0, 00:14:09.294 "ns_manage": 0 
00:14:09.294 }, 00:14:09.294 "multi_ctrlr": true, 00:14:09.294 "ana_reporting": false 00:14:09.294 }, 00:14:09.294 "vs": { 00:14:09.294 "nvme_version": "1.3" 00:14:09.294 }, 00:14:09.294 "ns_data": { 00:14:09.294 "id": 1, 00:14:09.294 "can_share": true 00:14:09.294 } 00:14:09.294 } 00:14:09.294 ], 00:14:09.294 "mp_policy": "active_passive" 00:14:09.294 } 00:14:09.294 } 00:14:09.294 ] 00:14:09.294 12:44:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3877193 00:14:09.294 12:44:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:09.294 12:44:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:09.554 Running I/O for 10 seconds... 00:14:10.491 Latency(us) 00:14:10.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:10.491 Nvme0n1 : 1.00 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:14:10.491 =================================================================================================================== 00:14:10.491 Total : 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:14:10.491 00:14:11.427 12:44:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8af08d83-4cb0-42bb-ba1a-5aef1fb5af9a 00:14:11.427 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:11.427 Nvme0n1 : 2.00 15304.00 59.78 0.00 0.00 0.00 0.00 0.00 00:14:11.427 =================================================================================================================== 00:14:11.427 Total : 15304.00 59.78 0.00 0.00 0.00 0.00 0.00 00:14:11.427 00:14:11.685 true 00:14:11.685 12:44:03 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8af08d83-4cb0-42bb-ba1a-5aef1fb5af9a 00:14:11.685 12:44:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:11.943 12:44:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:11.943 12:44:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:11.943 12:44:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3877193 00:14:12.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:12.513 Nvme0n1 : 3.00 15346.33 59.95 0.00 0.00 0.00 0.00 0.00 00:14:12.513 =================================================================================================================== 00:14:12.513 Total : 15346.33 59.95 0.00 0.00 0.00 0.00 0.00 00:14:12.513 00:14:13.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:13.450 Nvme0n1 : 4.00 15383.25 60.09 0.00 0.00 0.00 0.00 0.00 00:14:13.450 =================================================================================================================== 00:14:13.450 Total : 15383.25 60.09 0.00 0.00 0.00 0.00 0.00 00:14:13.450 00:14:14.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:14.830 Nvme0n1 : 5.00 15405.40 60.18 0.00 0.00 0.00 0.00 0.00 00:14:14.830 =================================================================================================================== 00:14:14.830 Total : 15405.40 60.18 0.00 0.00 0.00 0.00 0.00 00:14:14.830 00:14:15.766 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:15.766 Nvme0n1 : 6.00 15423.00 60.25 0.00 0.00 0.00 0.00 0.00 00:14:15.766 
=================================================================================================================== 00:14:15.766 Total : 15423.00 60.25 0.00 0.00 0.00 0.00 0.00 00:14:15.766 00:14:16.703 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:16.703 Nvme0n1 : 7.00 15442.86 60.32 0.00 0.00 0.00 0.00 0.00 00:14:16.703 =================================================================================================================== 00:14:16.703 Total : 15442.86 60.32 0.00 0.00 0.00 0.00 0.00 00:14:16.703 00:14:17.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:17.689 Nvme0n1 : 8.00 15457.25 60.38 0.00 0.00 0.00 0.00 0.00 00:14:17.689 =================================================================================================================== 00:14:17.689 Total : 15457.25 60.38 0.00 0.00 0.00 0.00 0.00 00:14:17.689 00:14:18.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:18.666 Nvme0n1 : 9.00 15468.78 60.42 0.00 0.00 0.00 0.00 0.00 00:14:18.666 =================================================================================================================== 00:14:18.666 Total : 15468.78 60.42 0.00 0.00 0.00 0.00 0.00 00:14:18.666 00:14:19.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.601 Nvme0n1 : 10.00 15478.00 60.46 0.00 0.00 0.00 0.00 0.00 00:14:19.601 =================================================================================================================== 00:14:19.601 Total : 15478.00 60.46 0.00 0.00 0.00 0.00 0.00 00:14:19.601 00:14:19.601 00:14:19.601 Latency(us) 00:14:19.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.601 Nvme0n1 : 10.01 15481.69 60.48 0.00 0.00 8261.30 4736.47 16920.20 00:14:19.601 
=================================================================================================================== 00:14:19.601 Total : 15481.69 60.48 0.00 0.00 8261.30 4736.47 16920.20 00:14:19.601 0 00:14:19.601 12:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3876979 00:14:19.601 12:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 3876979 ']' 00:14:19.601 12:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 3876979 00:14:19.601 12:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:14:19.601 12:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:19.601 12:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3876979 00:14:19.601 12:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:19.601 12:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:19.601 12:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3876979' 00:14:19.601 killing process with pid 3876979 00:14:19.601 12:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 3876979 00:14:19.601 Received shutdown signal, test time was about 10.000000 seconds 00:14:19.601 00:14:19.601 Latency(us) 00:14:19.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.601 =================================================================================================================== 00:14:19.601 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:19.601 12:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 3876979 00:14:19.859 12:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:20.117 12:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:20.375 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8af08d83-4cb0-42bb-ba1a-5aef1fb5af9a 00:14:20.375 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:20.634 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:20.634 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:20.634 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:20.892 [2024-07-15 12:44:12.657340] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:20.892 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8af08d83-4cb0-42bb-ba1a-5aef1fb5af9a 00:14:20.892 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:20.892 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8af08d83-4cb0-42bb-ba1a-5aef1fb5af9a 00:14:20.892 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:20.892 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:20.892 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:20.892 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:20.892 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:20.892 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:20.892 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:20.892 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:20.892 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8af08d83-4cb0-42bb-ba1a-5aef1fb5af9a 00:14:21.150 request: 00:14:21.150 { 00:14:21.150 "uuid": "8af08d83-4cb0-42bb-ba1a-5aef1fb5af9a", 00:14:21.150 "method": "bdev_lvol_get_lvstores", 00:14:21.150 "req_id": 1 00:14:21.150 } 00:14:21.150 Got JSON-RPC error response 00:14:21.150 response: 00:14:21.150 { 00:14:21.150 "code": -19, 00:14:21.150 "message": "No such device" 00:14:21.150 } 00:14:21.150 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:21.150 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:21.150 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:21.150 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:21.150 12:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:21.408 aio_bdev 00:14:21.408 12:44:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c5eca91a-9c45-4c77-9501-c3653dbe77af 00:14:21.408 12:44:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=c5eca91a-9c45-4c77-9501-c3653dbe77af 00:14:21.408 12:44:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:21.408 12:44:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:14:21.408 12:44:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:21.408 12:44:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:21.408 12:44:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:21.666 12:44:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c5eca91a-9c45-4c77-9501-c3653dbe77af -t 2000 00:14:21.923 [ 00:14:21.923 { 00:14:21.923 "name": "c5eca91a-9c45-4c77-9501-c3653dbe77af", 00:14:21.924 "aliases": [ 00:14:21.924 "lvs/lvol" 00:14:21.924 ], 00:14:21.924 "product_name": "Logical Volume", 00:14:21.924 "block_size": 4096, 00:14:21.924 "num_blocks": 38912, 00:14:21.924 "uuid": "c5eca91a-9c45-4c77-9501-c3653dbe77af", 00:14:21.924 "assigned_rate_limits": { 00:14:21.924 
"rw_ios_per_sec": 0, 00:14:21.924 "rw_mbytes_per_sec": 0, 00:14:21.924 "r_mbytes_per_sec": 0, 00:14:21.924 "w_mbytes_per_sec": 0 00:14:21.924 }, 00:14:21.924 "claimed": false, 00:14:21.924 "zoned": false, 00:14:21.924 "supported_io_types": { 00:14:21.924 "read": true, 00:14:21.924 "write": true, 00:14:21.924 "unmap": true, 00:14:21.924 "flush": false, 00:14:21.924 "reset": true, 00:14:21.924 "nvme_admin": false, 00:14:21.924 "nvme_io": false, 00:14:21.924 "nvme_io_md": false, 00:14:21.924 "write_zeroes": true, 00:14:21.924 "zcopy": false, 00:14:21.924 "get_zone_info": false, 00:14:21.924 "zone_management": false, 00:14:21.924 "zone_append": false, 00:14:21.924 "compare": false, 00:14:21.924 "compare_and_write": false, 00:14:21.924 "abort": false, 00:14:21.924 "seek_hole": true, 00:14:21.924 "seek_data": true, 00:14:21.924 "copy": false, 00:14:21.924 "nvme_iov_md": false 00:14:21.924 }, 00:14:21.924 "driver_specific": { 00:14:21.924 "lvol": { 00:14:21.924 "lvol_store_uuid": "8af08d83-4cb0-42bb-ba1a-5aef1fb5af9a", 00:14:21.924 "base_bdev": "aio_bdev", 00:14:21.924 "thin_provision": false, 00:14:21.924 "num_allocated_clusters": 38, 00:14:21.924 "snapshot": false, 00:14:21.924 "clone": false, 00:14:21.924 "esnap_clone": false 00:14:21.924 } 00:14:21.924 } 00:14:21.924 } 00:14:21.924 ] 00:14:21.924 12:44:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:14:21.924 12:44:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8af08d83-4cb0-42bb-ba1a-5aef1fb5af9a 00:14:21.924 12:44:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:22.182 12:44:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:22.182 12:44:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8af08d83-4cb0-42bb-ba1a-5aef1fb5af9a 00:14:22.182 12:44:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:22.440 12:44:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:22.440 12:44:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c5eca91a-9c45-4c77-9501-c3653dbe77af 00:14:22.698 12:44:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8af08d83-4cb0-42bb-ba1a-5aef1fb5af9a 00:14:22.957 12:44:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:23.216 12:44:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:23.217 00:14:23.217 real 0m17.149s 00:14:23.217 user 0m16.978s 00:14:23.217 sys 0m1.496s 00:14:23.217 12:44:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:23.217 12:44:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:23.217 ************************************ 00:14:23.217 END TEST lvs_grow_clean 00:14:23.217 ************************************ 00:14:23.217 12:44:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:23.217 12:44:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:23.217 12:44:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:23.217 12:44:15 nvmf_tcp.nvmf_lvs_grow -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:14:23.217 12:44:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:23.217 ************************************ 00:14:23.217 START TEST lvs_grow_dirty 00:14:23.217 ************************************ 00:14:23.217 12:44:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:23.217 12:44:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:23.217 12:44:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:23.217 12:44:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:23.217 12:44:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:23.217 12:44:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:23.217 12:44:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:23.217 12:44:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:23.217 12:44:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:23.217 12:44:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:23.476 12:44:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:23.476 12:44:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:23.734 12:44:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6c4cb54e-47ea-455d-85f6-92ff6738c067 00:14:23.734 12:44:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c4cb54e-47ea-455d-85f6-92ff6738c067 00:14:23.734 12:44:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:24.300 12:44:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:24.300 12:44:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:24.300 12:44:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6c4cb54e-47ea-455d-85f6-92ff6738c067 lvol 150 00:14:24.867 12:44:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=54e2dd42-ae7e-445d-bf43-8fd1a1723932 00:14:24.867 12:44:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:24.867 12:44:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:25.125 [2024-07-15 12:44:16.927644] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:25.125 [2024-07-15 12:44:16.927707] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:25.125 
true 00:14:25.125 12:44:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c4cb54e-47ea-455d-85f6-92ff6738c067 00:14:25.125 12:44:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:25.693 12:44:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:25.693 12:44:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:25.951 12:44:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 54e2dd42-ae7e-445d-bf43-8fd1a1723932 00:14:26.209 12:44:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:26.468 [2024-07-15 12:44:18.267668] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.468 12:44:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:26.726 12:44:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3880391 00:14:26.726 12:44:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:26.726 12:44:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- 
# trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:26.726 12:44:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3880391 /var/tmp/bdevperf.sock 00:14:26.726 12:44:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3880391 ']' 00:14:26.727 12:44:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:26.727 12:44:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:26.727 12:44:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:26.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:26.727 12:44:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:26.727 12:44:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:26.727 [2024-07-15 12:44:18.579615] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:14:26.727 [2024-07-15 12:44:18.579687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3880391 ] 00:14:26.727 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.985 [2024-07-15 12:44:18.667984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.985 [2024-07-15 12:44:18.773246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.921 12:44:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:27.921 12:44:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:27.921 12:44:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:28.180 Nvme0n1 00:14:28.439 12:44:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:28.439 [ 00:14:28.439 { 00:14:28.439 "name": "Nvme0n1", 00:14:28.439 "aliases": [ 00:14:28.439 "54e2dd42-ae7e-445d-bf43-8fd1a1723932" 00:14:28.439 ], 00:14:28.439 "product_name": "NVMe disk", 00:14:28.439 "block_size": 4096, 00:14:28.439 "num_blocks": 38912, 00:14:28.439 "uuid": "54e2dd42-ae7e-445d-bf43-8fd1a1723932", 00:14:28.439 "assigned_rate_limits": { 00:14:28.439 "rw_ios_per_sec": 0, 00:14:28.439 "rw_mbytes_per_sec": 0, 00:14:28.439 "r_mbytes_per_sec": 0, 00:14:28.439 "w_mbytes_per_sec": 0 00:14:28.439 }, 00:14:28.439 "claimed": false, 00:14:28.439 "zoned": false, 00:14:28.439 "supported_io_types": { 00:14:28.439 "read": true, 00:14:28.439 "write": true, 
00:14:28.439 "unmap": true, 00:14:28.439 "flush": true, 00:14:28.439 "reset": true, 00:14:28.439 "nvme_admin": true, 00:14:28.439 "nvme_io": true, 00:14:28.439 "nvme_io_md": false, 00:14:28.439 "write_zeroes": true, 00:14:28.439 "zcopy": false, 00:14:28.439 "get_zone_info": false, 00:14:28.439 "zone_management": false, 00:14:28.439 "zone_append": false, 00:14:28.439 "compare": true, 00:14:28.439 "compare_and_write": true, 00:14:28.439 "abort": true, 00:14:28.439 "seek_hole": false, 00:14:28.439 "seek_data": false, 00:14:28.439 "copy": true, 00:14:28.439 "nvme_iov_md": false 00:14:28.439 }, 00:14:28.439 "memory_domains": [ 00:14:28.439 { 00:14:28.439 "dma_device_id": "system", 00:14:28.439 "dma_device_type": 1 00:14:28.439 } 00:14:28.439 ], 00:14:28.439 "driver_specific": { 00:14:28.439 "nvme": [ 00:14:28.439 { 00:14:28.439 "trid": { 00:14:28.439 "trtype": "TCP", 00:14:28.439 "adrfam": "IPv4", 00:14:28.439 "traddr": "10.0.0.2", 00:14:28.439 "trsvcid": "4420", 00:14:28.439 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:28.439 }, 00:14:28.439 "ctrlr_data": { 00:14:28.439 "cntlid": 1, 00:14:28.439 "vendor_id": "0x8086", 00:14:28.439 "model_number": "SPDK bdev Controller", 00:14:28.439 "serial_number": "SPDK0", 00:14:28.439 "firmware_revision": "24.09", 00:14:28.439 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:28.439 "oacs": { 00:14:28.439 "security": 0, 00:14:28.439 "format": 0, 00:14:28.439 "firmware": 0, 00:14:28.439 "ns_manage": 0 00:14:28.439 }, 00:14:28.439 "multi_ctrlr": true, 00:14:28.439 "ana_reporting": false 00:14:28.439 }, 00:14:28.439 "vs": { 00:14:28.439 "nvme_version": "1.3" 00:14:28.439 }, 00:14:28.439 "ns_data": { 00:14:28.439 "id": 1, 00:14:28.439 "can_share": true 00:14:28.439 } 00:14:28.439 } 00:14:28.439 ], 00:14:28.439 "mp_policy": "active_passive" 00:14:28.439 } 00:14:28.439 } 00:14:28.439 ] 00:14:28.698 12:44:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3880664 00:14:28.698 12:44:20 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:28.698 12:44:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:28.698 Running I/O for 10 seconds... 00:14:29.635 Latency(us) 00:14:29.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.635 Nvme0n1 : 1.00 15244.00 59.55 0.00 0.00 0.00 0.00 0.00 00:14:29.635 =================================================================================================================== 00:14:29.635 Total : 15244.00 59.55 0.00 0.00 0.00 0.00 0.00 00:14:29.635 00:14:30.572 12:44:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6c4cb54e-47ea-455d-85f6-92ff6738c067 00:14:30.572 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.572 Nvme0n1 : 2.00 15337.50 59.91 0.00 0.00 0.00 0.00 0.00 00:14:30.572 =================================================================================================================== 00:14:30.572 Total : 15337.50 59.91 0.00 0.00 0.00 0.00 0.00 00:14:30.572 00:14:30.831 true 00:14:30.831 12:44:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c4cb54e-47ea-455d-85f6-92ff6738c067 00:14:30.831 12:44:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:31.090 12:44:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:31.090 12:44:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 
00:14:31.090 12:44:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3880664 00:14:31.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:31.657 Nvme0n1 : 3.00 15389.67 60.12 0.00 0.00 0.00 0.00 0.00 00:14:31.657 =================================================================================================================== 00:14:31.657 Total : 15389.67 60.12 0.00 0.00 0.00 0.00 0.00 00:14:31.657 00:14:32.594 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.594 Nvme0n1 : 4.00 15416.25 60.22 0.00 0.00 0.00 0.00 0.00 00:14:32.594 =================================================================================================================== 00:14:32.594 Total : 15416.25 60.22 0.00 0.00 0.00 0.00 0.00 00:14:32.594 00:14:33.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.972 Nvme0n1 : 5.00 15431.80 60.28 0.00 0.00 0.00 0.00 0.00 00:14:33.972 =================================================================================================================== 00:14:33.972 Total : 15431.80 60.28 0.00 0.00 0.00 0.00 0.00 00:14:33.972 00:14:34.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.909 Nvme0n1 : 6.00 15452.83 60.36 0.00 0.00 0.00 0.00 0.00 00:14:34.909 =================================================================================================================== 00:14:34.909 Total : 15452.83 60.36 0.00 0.00 0.00 0.00 0.00 00:14:34.909 00:14:35.846 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.846 Nvme0n1 : 7.00 15467.86 60.42 0.00 0.00 0.00 0.00 0.00 00:14:35.846 =================================================================================================================== 00:14:35.846 Total : 15467.86 60.42 0.00 0.00 0.00 0.00 0.00 00:14:35.846 00:14:36.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:14:36.782 Nvme0n1 : 8.00 15487.00 60.50 0.00 0.00 0.00 0.00 0.00 00:14:36.782 =================================================================================================================== 00:14:36.782 Total : 15487.00 60.50 0.00 0.00 0.00 0.00 0.00 00:14:36.782 00:14:37.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.719 Nvme0n1 : 9.00 15498.44 60.54 0.00 0.00 0.00 0.00 0.00 00:14:37.719 =================================================================================================================== 00:14:37.719 Total : 15498.44 60.54 0.00 0.00 0.00 0.00 0.00 00:14:37.720 00:14:38.656 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.656 Nvme0n1 : 10.00 15498.30 60.54 0.00 0.00 0.00 0.00 0.00 00:14:38.656 =================================================================================================================== 00:14:38.656 Total : 15498.30 60.54 0.00 0.00 0.00 0.00 0.00 00:14:38.656 00:14:38.656 00:14:38.656 Latency(us) 00:14:38.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.656 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.656 Nvme0n1 : 10.01 15494.75 60.53 0.00 0.00 8254.61 3902.37 16681.89 00:14:38.656 =================================================================================================================== 00:14:38.656 Total : 15494.75 60.53 0.00 0.00 8254.61 3902.37 16681.89 00:14:38.656 0 00:14:38.656 12:44:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3880391 00:14:38.656 12:44:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 3880391 ']' 00:14:38.656 12:44:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 3880391 00:14:38.656 12:44:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:14:38.656 12:44:30 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:38.656 12:44:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3880391 00:14:38.656 12:44:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:38.656 12:44:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:38.656 12:44:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3880391' 00:14:38.656 killing process with pid 3880391 00:14:38.656 12:44:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 3880391 00:14:38.656 Received shutdown signal, test time was about 10.000000 seconds 00:14:38.656 00:14:38.656 Latency(us) 00:14:38.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.656 =================================================================================================================== 00:14:38.656 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:38.656 12:44:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 3880391 00:14:38.915 12:44:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:39.173 12:44:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:39.433 12:44:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c4cb54e-47ea-455d-85f6-92ff6738c067 00:14:39.433 12:44:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:39.693 12:44:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:39.693 12:44:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:39.693 12:44:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3876367 00:14:39.693 12:44:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3876367 00:14:39.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3876367 Killed "${NVMF_APP[@]}" "$@" 00:14:39.693 12:44:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:39.693 12:44:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:39.693 12:44:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:39.693 12:44:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:39.693 12:44:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:39.693 12:44:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3882755 00:14:39.693 12:44:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:39.693 12:44:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3882755 00:14:39.693 12:44:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3882755 ']' 00:14:39.693 12:44:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.693 12:44:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 
00:14:39.693 12:44:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.693 12:44:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:39.693 12:44:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:39.953 [2024-07-15 12:44:31.663379] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:14:39.953 [2024-07-15 12:44:31.663439] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.953 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.953 [2024-07-15 12:44:31.750727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.953 [2024-07-15 12:44:31.839933] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.953 [2024-07-15 12:44:31.839971] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.953 [2024-07-15 12:44:31.839981] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.953 [2024-07-15 12:44:31.839989] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.953 [2024-07-15 12:44:31.839997] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:39.953 [2024-07-15 12:44:31.840017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.889 12:44:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.889 12:44:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:40.889 12:44:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:40.889 12:44:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:40.889 12:44:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:40.889 12:44:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.889 12:44:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:41.457 [2024-07-15 12:44:33.101862] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:41.457 [2024-07-15 12:44:33.101969] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:41.457 [2024-07-15 12:44:33.102006] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:41.457 12:44:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:41.457 12:44:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 54e2dd42-ae7e-445d-bf43-8fd1a1723932 00:14:41.457 12:44:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=54e2dd42-ae7e-445d-bf43-8fd1a1723932 00:14:41.457 12:44:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:41.457 12:44:33 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:41.457 12:44:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:41.457 12:44:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:41.457 12:44:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:41.457 12:44:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 54e2dd42-ae7e-445d-bf43-8fd1a1723932 -t 2000 00:14:41.715 [ 00:14:41.715 { 00:14:41.715 "name": "54e2dd42-ae7e-445d-bf43-8fd1a1723932", 00:14:41.715 "aliases": [ 00:14:41.715 "lvs/lvol" 00:14:41.715 ], 00:14:41.715 "product_name": "Logical Volume", 00:14:41.715 "block_size": 4096, 00:14:41.715 "num_blocks": 38912, 00:14:41.715 "uuid": "54e2dd42-ae7e-445d-bf43-8fd1a1723932", 00:14:41.715 "assigned_rate_limits": { 00:14:41.715 "rw_ios_per_sec": 0, 00:14:41.715 "rw_mbytes_per_sec": 0, 00:14:41.715 "r_mbytes_per_sec": 0, 00:14:41.715 "w_mbytes_per_sec": 0 00:14:41.715 }, 00:14:41.715 "claimed": false, 00:14:41.715 "zoned": false, 00:14:41.715 "supported_io_types": { 00:14:41.715 "read": true, 00:14:41.715 "write": true, 00:14:41.715 "unmap": true, 00:14:41.715 "flush": false, 00:14:41.715 "reset": true, 00:14:41.715 "nvme_admin": false, 00:14:41.715 "nvme_io": false, 00:14:41.715 "nvme_io_md": false, 00:14:41.715 "write_zeroes": true, 00:14:41.715 "zcopy": false, 00:14:41.715 "get_zone_info": false, 00:14:41.715 "zone_management": false, 00:14:41.715 "zone_append": false, 00:14:41.715 "compare": false, 00:14:41.715 "compare_and_write": false, 00:14:41.715 "abort": false, 00:14:41.715 "seek_hole": true, 00:14:41.715 "seek_data": true, 00:14:41.715 "copy": false, 00:14:41.715 "nvme_iov_md": false 
00:14:41.715 }, 00:14:41.715 "driver_specific": { 00:14:41.715 "lvol": { 00:14:41.715 "lvol_store_uuid": "6c4cb54e-47ea-455d-85f6-92ff6738c067", 00:14:41.715 "base_bdev": "aio_bdev", 00:14:41.715 "thin_provision": false, 00:14:41.715 "num_allocated_clusters": 38, 00:14:41.715 "snapshot": false, 00:14:41.715 "clone": false, 00:14:41.715 "esnap_clone": false 00:14:41.715 } 00:14:41.715 } 00:14:41.715 } 00:14:41.715 ] 00:14:41.715 12:44:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:41.715 12:44:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c4cb54e-47ea-455d-85f6-92ff6738c067 00:14:41.715 12:44:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:41.974 12:44:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:41.974 12:44:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c4cb54e-47ea-455d-85f6-92ff6738c067 00:14:41.974 12:44:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:42.232 12:44:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:42.232 12:44:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:42.491 [2024-07-15 12:44:34.431035] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:42.749 12:44:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
6c4cb54e-47ea-455d-85f6-92ff6738c067 00:14:42.749 12:44:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:42.749 12:44:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c4cb54e-47ea-455d-85f6-92ff6738c067 00:14:42.749 12:44:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:42.749 12:44:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.749 12:44:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:42.749 12:44:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.749 12:44:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:42.749 12:44:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.749 12:44:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:42.749 12:44:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:42.749 12:44:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c4cb54e-47ea-455d-85f6-92ff6738c067 00:14:43.008 request: 00:14:43.008 { 00:14:43.008 "uuid": "6c4cb54e-47ea-455d-85f6-92ff6738c067", 00:14:43.008 "method": "bdev_lvol_get_lvstores", 
00:14:43.008 "req_id": 1 00:14:43.008 } 00:14:43.008 Got JSON-RPC error response 00:14:43.008 response: 00:14:43.008 { 00:14:43.008 "code": -19, 00:14:43.008 "message": "No such device" 00:14:43.008 } 00:14:43.266 12:44:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:43.266 12:44:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:43.266 12:44:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:43.266 12:44:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:43.267 12:44:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:43.267 aio_bdev 00:14:43.526 12:44:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 54e2dd42-ae7e-445d-bf43-8fd1a1723932 00:14:43.526 12:44:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=54e2dd42-ae7e-445d-bf43-8fd1a1723932 00:14:43.526 12:44:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:43.526 12:44:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:43.526 12:44:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:43.526 12:44:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:43.526 12:44:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:43.526 12:44:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 54e2dd42-ae7e-445d-bf43-8fd1a1723932 -t 2000 00:14:43.785 [ 00:14:43.785 { 00:14:43.785 "name": "54e2dd42-ae7e-445d-bf43-8fd1a1723932", 00:14:43.785 "aliases": [ 00:14:43.785 "lvs/lvol" 00:14:43.785 ], 00:14:43.785 "product_name": "Logical Volume", 00:14:43.785 "block_size": 4096, 00:14:43.785 "num_blocks": 38912, 00:14:43.785 "uuid": "54e2dd42-ae7e-445d-bf43-8fd1a1723932", 00:14:43.785 "assigned_rate_limits": { 00:14:43.785 "rw_ios_per_sec": 0, 00:14:43.785 "rw_mbytes_per_sec": 0, 00:14:43.785 "r_mbytes_per_sec": 0, 00:14:43.785 "w_mbytes_per_sec": 0 00:14:43.785 }, 00:14:43.785 "claimed": false, 00:14:43.785 "zoned": false, 00:14:43.785 "supported_io_types": { 00:14:43.785 "read": true, 00:14:43.785 "write": true, 00:14:43.785 "unmap": true, 00:14:43.785 "flush": false, 00:14:43.785 "reset": true, 00:14:43.785 "nvme_admin": false, 00:14:43.785 "nvme_io": false, 00:14:43.785 "nvme_io_md": false, 00:14:43.785 "write_zeroes": true, 00:14:43.785 "zcopy": false, 00:14:43.785 "get_zone_info": false, 00:14:43.785 "zone_management": false, 00:14:43.785 "zone_append": false, 00:14:43.785 "compare": false, 00:14:43.785 "compare_and_write": false, 00:14:43.785 "abort": false, 00:14:43.785 "seek_hole": true, 00:14:43.785 "seek_data": true, 00:14:43.785 "copy": false, 00:14:43.785 "nvme_iov_md": false 00:14:43.785 }, 00:14:43.785 "driver_specific": { 00:14:43.785 "lvol": { 00:14:43.785 "lvol_store_uuid": "6c4cb54e-47ea-455d-85f6-92ff6738c067", 00:14:43.785 "base_bdev": "aio_bdev", 00:14:43.785 "thin_provision": false, 00:14:43.785 "num_allocated_clusters": 38, 00:14:43.785 "snapshot": false, 00:14:43.785 "clone": false, 00:14:43.785 "esnap_clone": false 00:14:43.785 } 00:14:43.785 } 00:14:43.785 } 00:14:43.785 ] 00:14:43.785 12:44:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:43.785 12:44:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c4cb54e-47ea-455d-85f6-92ff6738c067 00:14:43.785 12:44:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:44.043 12:44:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:44.043 12:44:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c4cb54e-47ea-455d-85f6-92ff6738c067 00:14:44.043 12:44:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:44.301 12:44:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:44.301 12:44:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 54e2dd42-ae7e-445d-bf43-8fd1a1723932 00:14:44.560 12:44:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6c4cb54e-47ea-455d-85f6-92ff6738c067 00:14:44.816 12:44:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:45.073 12:44:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:45.073 00:14:45.073 real 0m21.894s 00:14:45.073 user 0m53.732s 00:14:45.073 sys 0m3.764s 00:14:45.073 12:44:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:45.073 12:44:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 
00:14:45.073 ************************************ 00:14:45.073 END TEST lvs_grow_dirty 00:14:45.073 ************************************ 00:14:45.073 12:44:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:45.073 12:44:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:45.073 12:44:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:14:45.073 12:44:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:14:45.073 12:44:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:45.073 12:44:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:45.073 12:44:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:45.073 12:44:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:45.073 12:44:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:45.073 12:44:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:45.073 nvmf_trace.0 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:45.349 rmmod 
nvme_tcp 00:14:45.349 rmmod nvme_fabrics 00:14:45.349 rmmod nvme_keyring 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3882755 ']' 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3882755 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 3882755 ']' 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 3882755 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3882755 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3882755' 00:14:45.349 killing process with pid 3882755 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 3882755 00:14:45.349 12:44:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 3882755 00:14:45.665 12:44:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:45.665 12:44:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:45.665 12:44:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:45.665 12:44:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:14:45.665 12:44:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:45.665 12:44:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.665 12:44:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.665 12:44:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.600 12:44:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:47.600 00:14:47.600 real 0m48.444s 00:14:47.600 user 1m18.699s 00:14:47.600 sys 0m10.139s 00:14:47.600 12:44:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:47.600 12:44:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:47.600 ************************************ 00:14:47.600 END TEST nvmf_lvs_grow 00:14:47.600 ************************************ 00:14:47.600 12:44:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:47.600 12:44:39 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:47.600 12:44:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:47.600 12:44:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:47.600 12:44:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:47.600 ************************************ 00:14:47.600 START TEST nvmf_bdev_io_wait 00:14:47.600 ************************************ 00:14:47.600 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:47.858 * Looking for test storage... 
00:14:47.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:47.858 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:47.858 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:47.858 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.858 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.858 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.858 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.858 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.858 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.858 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.858 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.858 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.858 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.858 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:47.858 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:47.858 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:47.859 12:44:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:54.425 12:44:45 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:54.425 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:54.425 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:54.425 12:44:45 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:54.425 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:54.426 Found net devices under 0000:af:00.0: cvl_0_0 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:54.426 Found net devices under 0000:af:00.1: cvl_0_1 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:54.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:54.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:14:54.426 00:14:54.426 --- 10.0.0.2 ping statistics --- 00:14:54.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.426 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:54.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:54.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:14:54.426 00:14:54.426 --- 10.0.0.1 ping statistics --- 00:14:54.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.426 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3887353 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3887353 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 3887353 ']' 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:54.426 12:44:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:54.426 [2024-07-15 12:44:45.553863] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:14:54.426 [2024-07-15 12:44:45.553924] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.426 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.426 [2024-07-15 12:44:45.639412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:54.426 [2024-07-15 12:44:45.731967] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:54.426 [2024-07-15 12:44:45.732012] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.426 [2024-07-15 12:44:45.732022] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.426 [2024-07-15 12:44:45.732031] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.426 [2024-07-15 12:44:45.732038] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.426 [2024-07-15 12:44:45.732089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.426 [2024-07-15 12:44:45.732202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.426 [2024-07-15 12:44:45.732312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.426 [2024-07-15 12:44:45.732312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.684 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.684 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:14:54.684 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:54.684 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:54.684 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:54.684 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.684 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:54.684 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.684 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:54.684 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.684 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:54.684 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.684 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:54.684 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.684 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:54.684 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.684 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:54.684 [2024-07-15 12:44:46.614039] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.943 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.943 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:54.943 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.943 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:54.943 Malloc0 00:14:54.943 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.943 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:54.943 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.943 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:54.943 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.943 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:54.943 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.943 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:54.943 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.943 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:54.943 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.943 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:54.943 [2024-07-15 12:44:46.678090] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:54.943 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3887634 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3887636 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:54.944 { 00:14:54.944 "params": { 00:14:54.944 "name": "Nvme$subsystem", 00:14:54.944 "trtype": "$TEST_TRANSPORT", 
00:14:54.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:54.944 "adrfam": "ipv4", 00:14:54.944 "trsvcid": "$NVMF_PORT", 00:14:54.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:54.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:54.944 "hdgst": ${hdgst:-false}, 00:14:54.944 "ddgst": ${ddgst:-false} 00:14:54.944 }, 00:14:54.944 "method": "bdev_nvme_attach_controller" 00:14:54.944 } 00:14:54.944 EOF 00:14:54.944 )") 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3887638 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3887641 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:54.944 { 00:14:54.944 "params": { 00:14:54.944 "name": "Nvme$subsystem", 00:14:54.944 "trtype": "$TEST_TRANSPORT", 00:14:54.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:54.944 "adrfam": "ipv4", 00:14:54.944 "trsvcid": "$NVMF_PORT", 00:14:54.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:54.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:54.944 "hdgst": ${hdgst:-false}, 00:14:54.944 "ddgst": 
${ddgst:-false} 00:14:54.944 }, 00:14:54.944 "method": "bdev_nvme_attach_controller" 00:14:54.944 } 00:14:54.944 EOF 00:14:54.944 )") 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:54.944 { 00:14:54.944 "params": { 00:14:54.944 "name": "Nvme$subsystem", 00:14:54.944 "trtype": "$TEST_TRANSPORT", 00:14:54.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:54.944 "adrfam": "ipv4", 00:14:54.944 "trsvcid": "$NVMF_PORT", 00:14:54.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:54.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:54.944 "hdgst": ${hdgst:-false}, 00:14:54.944 "ddgst": ${ddgst:-false} 00:14:54.944 }, 00:14:54.944 "method": "bdev_nvme_attach_controller" 00:14:54.944 } 00:14:54.944 EOF 00:14:54.944 )") 00:14:54.944 12:44:46 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:54.944 { 00:14:54.944 "params": { 00:14:54.944 "name": "Nvme$subsystem", 00:14:54.944 "trtype": "$TEST_TRANSPORT", 00:14:54.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:54.944 "adrfam": "ipv4", 00:14:54.944 "trsvcid": "$NVMF_PORT", 00:14:54.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:54.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:54.944 "hdgst": ${hdgst:-false}, 00:14:54.944 "ddgst": ${ddgst:-false} 00:14:54.944 }, 00:14:54.944 "method": "bdev_nvme_attach_controller" 00:14:54.944 } 00:14:54.944 EOF 00:14:54.944 )") 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3887634 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:54.944 "params": { 00:14:54.944 "name": "Nvme1", 00:14:54.944 "trtype": "tcp", 00:14:54.944 "traddr": "10.0.0.2", 00:14:54.944 "adrfam": "ipv4", 00:14:54.944 "trsvcid": "4420", 00:14:54.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:54.944 "hdgst": false, 00:14:54.944 "ddgst": false 00:14:54.944 }, 00:14:54.944 "method": "bdev_nvme_attach_controller" 00:14:54.944 }' 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:54.944 "params": { 00:14:54.944 "name": "Nvme1", 00:14:54.944 "trtype": "tcp", 00:14:54.944 "traddr": "10.0.0.2", 00:14:54.944 "adrfam": "ipv4", 00:14:54.944 "trsvcid": "4420", 00:14:54.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:54.944 "hdgst": false, 00:14:54.944 "ddgst": false 00:14:54.944 }, 00:14:54.944 "method": "bdev_nvme_attach_controller" 00:14:54.944 }' 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:54.944 "params": { 00:14:54.944 "name": "Nvme1", 00:14:54.944 "trtype": "tcp", 00:14:54.944 "traddr": "10.0.0.2", 00:14:54.944 "adrfam": "ipv4", 00:14:54.944 "trsvcid": "4420", 00:14:54.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:54.944 "hdgst": false, 00:14:54.944 "ddgst": false 00:14:54.944 }, 00:14:54.944 "method": "bdev_nvme_attach_controller" 00:14:54.944 }' 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:54.944 12:44:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 
00:14:54.944 "params": { 00:14:54.944 "name": "Nvme1", 00:14:54.944 "trtype": "tcp", 00:14:54.944 "traddr": "10.0.0.2", 00:14:54.944 "adrfam": "ipv4", 00:14:54.944 "trsvcid": "4420", 00:14:54.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:54.944 "hdgst": false, 00:14:54.944 "ddgst": false 00:14:54.944 }, 00:14:54.944 "method": "bdev_nvme_attach_controller" 00:14:54.944 }' 00:14:54.944 [2024-07-15 12:44:46.730647] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:14:54.944 [2024-07-15 12:44:46.730710] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:54.944 [2024-07-15 12:44:46.733974] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:14:54.944 [2024-07-15 12:44:46.734029] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:54.944 [2024-07-15 12:44:46.735291] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:14:54.944 [2024-07-15 12:44:46.735291] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:14:54.944 [2024-07-15 12:44:46.735349] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 12:44:46.735350] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:54.944 --proc-type=auto ] 00:14:54.944 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.944 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.203 [2024-07-15 12:44:46.951437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.203 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.203 [2024-07-15 12:44:47.016447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.203 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.203 [2024-07-15 12:44:47.070912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.203 [2024-07-15 12:44:47.092624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:55.203 [2024-07-15 12:44:47.125994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:14:55.203 [2024-07-15 12:44:47.132007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.461 [2024-07-15 12:44:47.160202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:55.461 [2024-07-15 12:44:47.221464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:55.461 Running I/O for 1 seconds... 00:14:55.461 Running I/O for 1 seconds... 00:14:55.461 Running I/O for 1 seconds... 00:14:55.461 Running I/O for 1 seconds... 
00:14:56.838 00:14:56.838 Latency(us) 00:14:56.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.838 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:56.838 Nvme1n1 : 1.01 8201.87 32.04 0.00 0.00 15522.47 5987.61 19899.11 00:14:56.838 =================================================================================================================== 00:14:56.838 Total : 8201.87 32.04 0.00 0.00 15522.47 5987.61 19899.11 00:14:56.838 00:14:56.838 Latency(us) 00:14:56.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.838 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:56.838 Nvme1n1 : 1.01 5845.19 22.83 0.00 0.00 21757.19 12392.26 32410.53 00:14:56.838 =================================================================================================================== 00:14:56.838 Total : 5845.19 22.83 0.00 0.00 21757.19 12392.26 32410.53 00:14:56.838 00:14:56.838 Latency(us) 00:14:56.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.838 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:56.838 Nvme1n1 : 1.00 162160.92 633.44 0.00 0.00 786.55 309.06 912.29 00:14:56.838 =================================================================================================================== 00:14:56.838 Total : 162160.92 633.44 0.00 0.00 786.55 309.06 912.29 00:14:56.838 00:14:56.838 Latency(us) 00:14:56.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.838 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:56.838 Nvme1n1 : 1.01 4475.73 17.48 0.00 0.00 28424.00 11915.64 49092.42 00:14:56.838 =================================================================================================================== 00:14:56.838 Total : 4475.73 17.48 0.00 0.00 28424.00 11915.64 49092.42 00:14:57.097 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 3887636 00:14:57.097 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3887638 00:14:57.097 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3887641 00:14:57.097 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:57.097 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.097 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:57.097 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.097 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:57.097 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:57.097 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:57.097 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:57.097 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:57.097 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:57.098 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:57.098 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:57.098 rmmod nvme_tcp 00:14:57.098 rmmod nvme_fabrics 00:14:57.098 rmmod nvme_keyring 00:14:57.098 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:57.098 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:57.098 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:57.098 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3887353 ']' 00:14:57.098 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3887353 00:14:57.098 12:44:48 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 3887353 ']' 00:14:57.098 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 3887353 00:14:57.098 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:14:57.098 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:57.098 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3887353 00:14:57.098 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:57.098 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:57.098 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3887353' 00:14:57.098 killing process with pid 3887353 00:14:57.098 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 3887353 00:14:57.098 12:44:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 3887353 00:14:57.357 12:44:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:57.357 12:44:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:57.357 12:44:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:57.357 12:44:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:57.357 12:44:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:57.357 12:44:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.357 12:44:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.357 12:44:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.263 12:44:51 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:59.263 00:14:59.263 real 0m11.679s 00:14:59.263 user 0m21.182s 00:14:59.263 sys 0m6.123s 00:14:59.263 12:44:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:59.263 12:44:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:59.263 ************************************ 00:14:59.263 END TEST nvmf_bdev_io_wait 00:14:59.263 ************************************ 00:14:59.522 12:44:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:59.522 12:44:51 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:59.522 12:44:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:59.522 12:44:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.522 12:44:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:59.522 ************************************ 00:14:59.522 START TEST nvmf_queue_depth 00:14:59.522 ************************************ 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:59.522 * Looking for test storage... 
00:14:59.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.522 12:44:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:14:59.523 12:44:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:06.095 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:06.095 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:06.095 Found net devices under 0000:af:00.0: cvl_0_0 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:06.095 Found net devices under 0000:af:00.1: cvl_0_1 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:06.095 12:44:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:06.095 12:44:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:06.095 12:44:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:06.095 12:44:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:06.095 12:44:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:06.095 12:44:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:06.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:06.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:15:06.095 00:15:06.095 --- 10.0.0.2 ping statistics --- 00:15:06.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.095 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:15:06.095 12:44:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:06.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:06.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:15:06.095 00:15:06.095 --- 10.0.0.1 ping statistics --- 00:15:06.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.095 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:15:06.095 12:44:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:06.095 12:44:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:06.095 12:44:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:06.095 12:44:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:06.096 12:44:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:06.096 12:44:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:06.096 12:44:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:06.096 12:44:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:06.096 12:44:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:06.096 12:44:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:06.096 12:44:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:06.096 12:44:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:06.096 12:44:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:15:06.096 12:44:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3891650 00:15:06.096 12:44:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3891650 00:15:06.096 12:44:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3891650 ']' 00:15:06.096 12:44:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.096 12:44:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:06.096 12:44:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:06.096 12:44:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.096 12:44:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:06.096 12:44:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:06.096 [2024-07-15 12:44:57.269365] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:15:06.096 [2024-07-15 12:44:57.269475] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.096 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.096 [2024-07-15 12:44:57.394626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.096 [2024-07-15 12:44:57.498442] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:06.096 [2024-07-15 12:44:57.498499] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.096 [2024-07-15 12:44:57.498512] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.096 [2024-07-15 12:44:57.498523] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.096 [2024-07-15 12:44:57.498533] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:06.096 [2024-07-15 12:44:57.498561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:06.666 [2024-07-15 12:44:58.469094] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:06.666 12:44:58 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:06.666 Malloc0 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:06.666 [2024-07-15 12:44:58.534268] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3891933 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3891933 /var/tmp/bdevperf.sock 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3891933 ']' 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:06.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:06.666 12:44:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:06.666 [2024-07-15 12:44:58.585851] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:15:06.666 [2024-07-15 12:44:58.585911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891933 ] 00:15:06.925 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.925 [2024-07-15 12:44:58.667617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.925 [2024-07-15 12:44:58.765170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.183 12:44:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:07.183 12:44:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:07.183 12:44:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:07.183 12:44:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.183 12:44:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:07.441 NVMe0n1 00:15:07.441 12:44:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.442 12:44:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:07.700 Running I/O for 10 seconds... 
00:15:17.672 00:15:17.672 Latency(us) 00:15:17.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.672 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:17.672 Verification LBA range: start 0x0 length 0x4000 00:15:17.672 NVMe0n1 : 10.08 6604.40 25.80 0.00 0.00 154202.92 6583.39 92941.96 00:15:17.672 =================================================================================================================== 00:15:17.673 Total : 6604.40 25.80 0.00 0.00 154202.92 6583.39 92941.96 00:15:17.673 0 00:15:17.673 12:45:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3891933 00:15:17.673 12:45:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3891933 ']' 00:15:17.673 12:45:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3891933 00:15:17.673 12:45:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:17.673 12:45:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:17.673 12:45:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3891933 00:15:17.943 12:45:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:17.943 12:45:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:17.943 12:45:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3891933' 00:15:17.943 killing process with pid 3891933 00:15:17.943 12:45:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3891933 00:15:17.943 Received shutdown signal, test time was about 10.000000 seconds 00:15:17.943 00:15:17.943 Latency(us) 00:15:17.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.943 
=================================================================================================================== 00:15:17.943 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:17.943 12:45:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3891933 00:15:17.943 12:45:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:17.943 12:45:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:17.943 12:45:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:17.943 12:45:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:17.943 12:45:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:17.943 12:45:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:17.943 12:45:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:17.943 12:45:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:17.943 rmmod nvme_tcp 00:15:17.943 rmmod nvme_fabrics 00:15:17.943 rmmod nvme_keyring 00:15:18.204 12:45:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:18.204 12:45:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:18.204 12:45:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:18.204 12:45:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3891650 ']' 00:15:18.204 12:45:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3891650 00:15:18.204 12:45:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3891650 ']' 00:15:18.204 12:45:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3891650 00:15:18.204 12:45:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:18.204 12:45:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:18.204 12:45:09 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3891650 00:15:18.204 12:45:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:18.204 12:45:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:18.204 12:45:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3891650' 00:15:18.204 killing process with pid 3891650 00:15:18.204 12:45:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3891650 00:15:18.204 12:45:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3891650 00:15:18.462 12:45:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:18.462 12:45:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:18.462 12:45:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:18.462 12:45:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:18.462 12:45:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:18.462 12:45:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.462 12:45:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.462 12:45:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.362 12:45:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:20.362 00:15:20.362 real 0m20.991s 00:15:20.362 user 0m25.904s 00:15:20.362 sys 0m5.859s 00:15:20.362 12:45:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:20.362 12:45:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:20.362 ************************************ 00:15:20.362 END TEST nvmf_queue_depth 
00:15:20.362 ************************************ 00:15:20.362 12:45:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:20.362 12:45:12 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:20.362 12:45:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:20.362 12:45:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:20.362 12:45:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:20.621 ************************************ 00:15:20.621 START TEST nvmf_target_multipath 00:15:20.621 ************************************ 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:20.621 * Looking for test storage... 00:15:20.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:20.621 12:45:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:27.189 
12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:27.189 12:45:17 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:27.189 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:27.189 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:27.189 
12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:27.189 Found net devices under 0000:af:00.0: cvl_0_0 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:27.189 Found net devices under 0000:af:00.1: cvl_0_1 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:27.189 12:45:17 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:27.189 12:45:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:27.189 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:27.189 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:27.189 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:27.189 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:27.189 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:27.189 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:27.189 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:27.189 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:27.189 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:27.189 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:27.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:27.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:15:27.189 00:15:27.189 --- 10.0.0.2 ping statistics --- 00:15:27.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.189 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:15:27.189 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:27.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:27.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:15:27.189 00:15:27.189 --- 10.0.0.1 ping statistics --- 00:15:27.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.189 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:15:27.189 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:27.189 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:27.189 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:27.189 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:27.190 only one NIC for nvmf test 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # 
nvmftestfini 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:27.190 rmmod nvme_tcp 00:15:27.190 rmmod nvme_fabrics 00:15:27.190 rmmod nvme_keyring 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.190 12:45:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:28.568 00:15:28.568 real 0m8.131s 00:15:28.568 user 0m1.665s 00:15:28.568 sys 0m4.438s 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:28.568 12:45:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:28.568 ************************************ 00:15:28.568 END TEST nvmf_target_multipath 00:15:28.568 ************************************ 00:15:28.568 12:45:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:28.568 12:45:20 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:28.568 12:45:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:28.568 12:45:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.568 12:45:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:28.827 ************************************ 00:15:28.827 START TEST nvmf_zcopy 00:15:28.827 ************************************ 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:28.827 * Looking for test storage... 
00:15:28.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:28.827 12:45:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:28.828 12:45:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:35.442 12:45:26 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:35.442 12:45:26 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:35.442 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:35.442 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:35.442 Found net devices under 0000:af:00.0: cvl_0_0 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:35.442 Found net devices under 0000:af:00.1: cvl_0_1 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.442 12:45:26 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:35.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:35.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:15:35.442 00:15:35.442 --- 10.0.0.2 ping statistics --- 00:15:35.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.442 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:15:35.442 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:35.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:35.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:15:35.443 00:15:35.443 --- 10.0.0.1 ping statistics --- 00:15:35.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.443 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3901700 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3901700 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3901700 ']' 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:35.443 12:45:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:35.443 [2024-07-15 12:45:26.522936] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:15:35.443 [2024-07-15 12:45:26.522996] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.443 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.443 [2024-07-15 12:45:26.609453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.443 [2024-07-15 12:45:26.712243] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.443 [2024-07-15 12:45:26.712294] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.443 [2024-07-15 12:45:26.712307] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.443 [2024-07-15 12:45:26.712318] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.443 [2024-07-15 12:45:26.712332] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:35.443 [2024-07-15 12:45:26.712358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:36.010 [2024-07-15 12:45:27.768864] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 
00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:36.010 [2024-07-15 12:45:27.789091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:36.010 malloc0 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem 
config 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:36.010 { 00:15:36.010 "params": { 00:15:36.010 "name": "Nvme$subsystem", 00:15:36.010 "trtype": "$TEST_TRANSPORT", 00:15:36.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:36.010 "adrfam": "ipv4", 00:15:36.010 "trsvcid": "$NVMF_PORT", 00:15:36.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:36.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:36.010 "hdgst": ${hdgst:-false}, 00:15:36.010 "ddgst": ${ddgst:-false} 00:15:36.010 }, 00:15:36.010 "method": "bdev_nvme_attach_controller" 00:15:36.010 } 00:15:36.010 EOF 00:15:36.010 )") 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:36.010 12:45:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:36.010 "params": { 00:15:36.010 "name": "Nvme1", 00:15:36.010 "trtype": "tcp", 00:15:36.010 "traddr": "10.0.0.2", 00:15:36.010 "adrfam": "ipv4", 00:15:36.010 "trsvcid": "4420", 00:15:36.010 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.010 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:36.010 "hdgst": false, 00:15:36.010 "ddgst": false 00:15:36.010 }, 00:15:36.010 "method": "bdev_nvme_attach_controller" 00:15:36.010 }' 00:15:36.010 [2024-07-15 12:45:27.880246] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:15:36.010 [2024-07-15 12:45:27.880332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901985 ] 00:15:36.010 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.272 [2024-07-15 12:45:27.964451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.272 [2024-07-15 12:45:28.058094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.559 Running I/O for 10 seconds... 00:15:46.583 00:15:46.584 Latency(us) 00:15:46.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.584 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:46.584 Verification LBA range: start 0x0 length 0x1000 00:15:46.584 Nvme1n1 : 10.02 4435.78 34.65 0.00 0.00 28773.45 1087.30 37176.79 00:15:46.584 =================================================================================================================== 00:15:46.584 Total : 4435.78 34.65 0.00 0.00 28773.45 1087.30 37176.79 00:15:46.584 12:45:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3903819 00:15:46.584 12:45:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:15:46.584 12:45:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:46.584 12:45:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:46.584 12:45:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:46.584 12:45:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:46.584 12:45:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:46.584 12:45:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:46.584 12:45:38 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:46.584 { 00:15:46.584 "params": { 00:15:46.584 "name": "Nvme$subsystem", 00:15:46.584 "trtype": "$TEST_TRANSPORT", 00:15:46.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:46.584 "adrfam": "ipv4", 00:15:46.584 "trsvcid": "$NVMF_PORT", 00:15:46.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:46.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:46.584 "hdgst": ${hdgst:-false}, 00:15:46.584 "ddgst": ${ddgst:-false} 00:15:46.584 }, 00:15:46.584 "method": "bdev_nvme_attach_controller" 00:15:46.584 } 00:15:46.584 EOF 00:15:46.584 )") 00:15:46.584 [2024-07-15 12:45:38.478138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.584 [2024-07-15 12:45:38.478183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.584 12:45:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:46.584 12:45:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:15:46.584 12:45:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:46.584 12:45:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:46.584 "params": { 00:15:46.584 "name": "Nvme1", 00:15:46.584 "trtype": "tcp", 00:15:46.584 "traddr": "10.0.0.2", 00:15:46.584 "adrfam": "ipv4", 00:15:46.584 "trsvcid": "4420", 00:15:46.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:46.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:46.584 "hdgst": false, 00:15:46.584 "ddgst": false 00:15:46.584 }, 00:15:46.584 "method": "bdev_nvme_attach_controller" 00:15:46.584 }' 00:15:46.584 [2024-07-15 12:45:38.490142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.584 [2024-07-15 12:45:38.490169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.584 [2024-07-15 12:45:38.498157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.584 [2024-07-15 12:45:38.498174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.584 [2024-07-15 12:45:38.506182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.584 [2024-07-15 12:45:38.506198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.584 [2024-07-15 12:45:38.518218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.584 [2024-07-15 12:45:38.518234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.842 [2024-07-15 12:45:38.530270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.842 [2024-07-15 12:45:38.530288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.842 [2024-07-15 12:45:38.542293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.842 [2024-07-15 12:45:38.542311] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.842 [2024-07-15 12:45:38.552637] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:15:46.842 [2024-07-15 12:45:38.552744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903819 ] 00:15:46.842 [2024-07-15 12:45:38.554328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.842 [2024-07-15 12:45:38.554345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.842 [2024-07-15 12:45:38.566368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.842 [2024-07-15 12:45:38.566385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.842 [2024-07-15 12:45:38.578413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.842 [2024-07-15 12:45:38.578435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.842 [2024-07-15 12:45:38.590436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.842 [2024-07-15 12:45:38.590453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.842 [2024-07-15 12:45:38.602467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.842 [2024-07-15 12:45:38.602484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.842 [2024-07-15 12:45:38.614508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.842 [2024-07-15 12:45:38.614525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.842 EAL: No free 2048 kB hugepages reported 
on node 1 00:15:46.843 [2024-07-15 12:45:38.626549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.843 [2024-07-15 12:45:38.626565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.843 [2024-07-15 12:45:38.638579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.843 [2024-07-15 12:45:38.638596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.843 [2024-07-15 12:45:38.650612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.843 [2024-07-15 12:45:38.650629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.843 [2024-07-15 12:45:38.662648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.843 [2024-07-15 12:45:38.662666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.843 [2024-07-15 12:45:38.670665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.843 [2024-07-15 12:45:38.670683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.843 [2024-07-15 12:45:38.671490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.843 [2024-07-15 12:45:38.678692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.843 [2024-07-15 12:45:38.678711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.843 [2024-07-15 12:45:38.686716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.843 [2024-07-15 12:45:38.686735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.843 [2024-07-15 12:45:38.694740] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.843 [2024-07-15 12:45:38.694757] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.843 [2024-07-15 12:45:38.706779] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.843 [2024-07-15 12:45:38.706795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.843 [2024-07-15 12:45:38.714800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.843 [2024-07-15 12:45:38.714817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.843 [2024-07-15 12:45:38.722827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.843 [2024-07-15 12:45:38.722850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.843 [2024-07-15 12:45:38.730851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.843 [2024-07-15 12:45:38.730875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.843 [2024-07-15 12:45:38.738872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.843 [2024-07-15 12:45:38.738888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.843 [2024-07-15 12:45:38.750904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.843 [2024-07-15 12:45:38.750921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.843 [2024-07-15 12:45:38.758930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.843 [2024-07-15 12:45:38.758948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.843 [2024-07-15 12:45:38.766951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.843 [2024-07-15 12:45:38.766968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:46.843 [2024-07-15 12:45:38.770656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.843 [2024-07-15 12:45:38.774975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.843 [2024-07-15 12:45:38.774993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.783002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.783024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.795040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.795064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.803055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.803073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.811079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.811097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.819103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.819121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.827128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.827146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.839166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.839198] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.847188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.847204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.855220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.855240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.863270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.863297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.871286] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.871308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.883332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.883354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.891347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.891368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.899367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.899386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.907383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.907399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:47.102 [2024-07-15 12:45:38.915408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.915424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.927448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.927464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.935484] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.935502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.943499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.943520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.951520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.951537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.959549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.959565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.971589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.971606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.979611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.979629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.987639] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.987658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:38.995656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:38.995673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:39.003682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:39.003707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:39.015718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:39.015736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:39.023742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:39.023760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.102 [2024-07-15 12:45:39.031766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.102 [2024-07-15 12:45:39.031783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.361 [2024-07-15 12:45:39.078630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.361 [2024-07-15 12:45:39.078658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.361 [2024-07-15 12:45:39.087946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.361 [2024-07-15 12:45:39.087966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.361 Running I/O for 5 seconds... 
00:15:47.361 [2024-07-15 12:45:39.108058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.361 [2024-07-15 12:45:39.108089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.361 [2024-07-15 12:45:39.122853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.361 [2024-07-15 12:45:39.122882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.361 [2024-07-15 12:45:39.139979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.361 [2024-07-15 12:45:39.140009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.361 [2024-07-15 12:45:39.158300] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.361 [2024-07-15 12:45:39.158330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.361 [2024-07-15 12:45:39.170931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.361 [2024-07-15 12:45:39.170961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.361 [2024-07-15 12:45:39.185322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.361 [2024-07-15 12:45:39.185350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.361 [2024-07-15 12:45:39.199400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.361 [2024-07-15 12:45:39.199428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.361 [2024-07-15 12:45:39.213907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.361 [2024-07-15 12:45:39.213936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.361 [2024-07-15 12:45:39.228503] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.361 [2024-07-15 12:45:39.228531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.361 [2024-07-15 12:45:39.243002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.361 [2024-07-15 12:45:39.243030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.361 [2024-07-15 12:45:39.260091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.361 [2024-07-15 12:45:39.260120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.361 [2024-07-15 12:45:39.273972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.361 [2024-07-15 12:45:39.274001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.361 [2024-07-15 12:45:39.288448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.361 [2024-07-15 12:45:39.288476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.620 [2024-07-15 12:45:39.302896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.620 [2024-07-15 12:45:39.302925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.620 [2024-07-15 12:45:39.317451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.620 [2024-07-15 12:45:39.317481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.620 [2024-07-15 12:45:39.334953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.620 [2024-07-15 12:45:39.334983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.620 [2024-07-15 12:45:39.348759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:47.620 [2024-07-15 12:45:39.348788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.620 [2024-07-15 12:45:39.363464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.620 [2024-07-15 12:45:39.363494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.620 [2024-07-15 12:45:39.377662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.620 [2024-07-15 12:45:39.377691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.620 [2024-07-15 12:45:39.391853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.620 [2024-07-15 12:45:39.391882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.620 [2024-07-15 12:45:39.408907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.620 [2024-07-15 12:45:39.408936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.620 [2024-07-15 12:45:39.422021] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.620 [2024-07-15 12:45:39.422049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.620 [2024-07-15 12:45:39.434500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.620 [2024-07-15 12:45:39.434529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.620 [2024-07-15 12:45:39.446993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.620 [2024-07-15 12:45:39.447021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.620 [2024-07-15 12:45:39.459656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.620 
[2024-07-15 12:45:39.459684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.620 [2024-07-15 12:45:39.472756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.620 [2024-07-15 12:45:39.472784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.620 [2024-07-15 12:45:39.487484] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.620 [2024-07-15 12:45:39.487513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.620 [2024-07-15 12:45:39.501663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.620 [2024-07-15 12:45:39.501693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.620 [2024-07-15 12:45:39.515990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.620 [2024-07-15 12:45:39.516020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.620 [2024-07-15 12:45:39.530706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.620 [2024-07-15 12:45:39.530735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.620 [2024-07-15 12:45:39.547952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.620 [2024-07-15 12:45:39.547982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.879 [2024-07-15 12:45:39.561606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.879 [2024-07-15 12:45:39.561634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.879 [2024-07-15 12:45:39.576735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.879 [2024-07-15 12:45:39.576765] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.879 [2024-07-15 12:45:39.591411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.879 [2024-07-15 12:45:39.591440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.879 [2024-07-15 12:45:39.606054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.879 [2024-07-15 12:45:39.606082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.879 [2024-07-15 12:45:39.623479] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.879 [2024-07-15 12:45:39.623507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.879 [2024-07-15 12:45:39.637538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.879 [2024-07-15 12:45:39.637566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.879 [2024-07-15 12:45:39.652199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.879 [2024-07-15 12:45:39.652227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.879 [2024-07-15 12:45:39.667218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.879 [2024-07-15 12:45:39.667246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.879 [2024-07-15 12:45:39.681560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.879 [2024-07-15 12:45:39.681589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.879 [2024-07-15 12:45:39.698414] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.879 [2024-07-15 12:45:39.698443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:47.879 [2024-07-15 12:45:39.712098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.879 [2024-07-15 12:45:39.712126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.879 [2024-07-15 12:45:39.726691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.879 [2024-07-15 12:45:39.726719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.879 [2024-07-15 12:45:39.741062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.879 [2024-07-15 12:45:39.741091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.879 [2024-07-15 12:45:39.755657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.879 [2024-07-15 12:45:39.755685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.879 [2024-07-15 12:45:39.769712] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.879 [2024-07-15 12:45:39.769740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.879 [2024-07-15 12:45:39.784433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.879 [2024-07-15 12:45:39.784461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.879 [2024-07-15 12:45:39.799214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.879 [2024-07-15 12:45:39.799243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.879 [2024-07-15 12:45:39.814132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.879 [2024-07-15 12:45:39.814160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.138 [2024-07-15 12:45:39.828698] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.138 [2024-07-15 12:45:39.828727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.138 [2024-07-15 12:45:39.845740] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.138 [2024-07-15 12:45:39.845768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.138 [2024-07-15 12:45:39.858893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.138 [2024-07-15 12:45:39.858921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.138 [2024-07-15 12:45:39.871566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.138 [2024-07-15 12:45:39.871594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.138 [2024-07-15 12:45:39.884565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.138 [2024-07-15 12:45:39.884593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.138 [2024-07-15 12:45:39.898913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.138 [2024-07-15 12:45:39.898939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.138 [2024-07-15 12:45:39.913829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.138 [2024-07-15 12:45:39.913856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.138 [2024-07-15 12:45:39.928059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.138 [2024-07-15 12:45:39.928087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.138 [2024-07-15 12:45:39.942784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:48.138 [2024-07-15 12:45:39.942812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.138 [2024-07-15 12:45:39.957381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.138 [2024-07-15 12:45:39.957408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.138 [2024-07-15 12:45:39.971664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.138 [2024-07-15 12:45:39.971692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.138 [2024-07-15 12:45:39.988921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.138 [2024-07-15 12:45:39.988949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.138 [2024-07-15 12:45:40.002597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.138 [2024-07-15 12:45:40.002625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.138 [2024-07-15 12:45:40.017426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.138 [2024-07-15 12:45:40.017456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.138 [2024-07-15 12:45:40.032857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.138 [2024-07-15 12:45:40.032895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.138 [2024-07-15 12:45:40.047026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.138 [2024-07-15 12:45:40.047065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.138 [2024-07-15 12:45:40.065922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.138 
[2024-07-15 12:45:40.065954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.398 [2024-07-15 12:45:40.084234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.398 [2024-07-15 12:45:40.084273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.398 [2024-07-15 12:45:40.096313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.398 [2024-07-15 12:45:40.096342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.398 [2024-07-15 12:45:40.110492] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.398 [2024-07-15 12:45:40.110520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.398 [2024-07-15 12:45:40.124974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.398 [2024-07-15 12:45:40.125008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.398 [2024-07-15 12:45:40.142360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.398 [2024-07-15 12:45:40.142389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.398 [2024-07-15 12:45:40.156397] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.398 [2024-07-15 12:45:40.156426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.398 [2024-07-15 12:45:40.170942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.398 [2024-07-15 12:45:40.170971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.398 [2024-07-15 12:45:40.185107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.398 [2024-07-15 12:45:40.185135] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.398 [2024-07-15 12:45:40.199736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.398 [2024-07-15 12:45:40.199765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.398 [2024-07-15 12:45:40.216960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.398 [2024-07-15 12:45:40.216989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.398 [2024-07-15 12:45:40.231091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.398 [2024-07-15 12:45:40.231120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.398 [2024-07-15 12:45:40.245498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.398 [2024-07-15 12:45:40.245526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.398 [2024-07-15 12:45:40.260028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.398 [2024-07-15 12:45:40.260057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.398 [2024-07-15 12:45:40.274579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.398 [2024-07-15 12:45:40.274607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.398 [2024-07-15 12:45:40.289156] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.398 [2024-07-15 12:45:40.289183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.398 [2024-07-15 12:45:40.303552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.398 [2024-07-15 12:45:40.303580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:48.398 [2024-07-15 12:45:40.317934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.398 [2024-07-15 12:45:40.317963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.398 [2024-07-15 12:45:40.332701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.398 [2024-07-15 12:45:40.332729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.657 [2024-07-15 12:45:40.347425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.657 [2024-07-15 12:45:40.347455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.657 [2024-07-15 12:45:40.364469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.657 [2024-07-15 12:45:40.364497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.657 [2024-07-15 12:45:40.378170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.657 [2024-07-15 12:45:40.378197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.657 [2024-07-15 12:45:40.392681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.657 [2024-07-15 12:45:40.392710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.657 [2024-07-15 12:45:40.407126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.657 [2024-07-15 12:45:40.407159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.657 [2024-07-15 12:45:40.421766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.657 [2024-07-15 12:45:40.421794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.657 [2024-07-15 12:45:40.438845] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.657 [2024-07-15 12:45:40.438873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.657 [2024-07-15 12:45:40.452776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.657 [2024-07-15 12:45:40.452804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.657 [2024-07-15 12:45:40.467399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.657 [2024-07-15 12:45:40.467428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.657 [2024-07-15 12:45:40.481698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.657 [2024-07-15 12:45:40.481726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.657 [2024-07-15 12:45:40.496123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.657 [2024-07-15 12:45:40.496151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.657 [2024-07-15 12:45:40.510168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.657 [2024-07-15 12:45:40.510197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.657 [2024-07-15 12:45:40.524572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.657 [2024-07-15 12:45:40.524600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.657 [2024-07-15 12:45:40.539414] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.657 [2024-07-15 12:45:40.539442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.657 [2024-07-15 12:45:40.553674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:48.657 [2024-07-15 12:45:40.553701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.657 [2024-07-15 12:45:40.567953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.657 [2024-07-15 12:45:40.567982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.657 [2024-07-15 12:45:40.586686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.657 [2024-07-15 12:45:40.586715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.916 [2024-07-15 12:45:40.604979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.916 [2024-07-15 12:45:40.605011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.916 [2024-07-15 12:45:40.617347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.916 [2024-07-15 12:45:40.617376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.916 [2024-07-15 12:45:40.632604] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.916 [2024-07-15 12:45:40.632634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.916 [2024-07-15 12:45:40.647215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.916 [2024-07-15 12:45:40.647244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.916 [2024-07-15 12:45:40.664753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.916 [2024-07-15 12:45:40.664782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.916 [2024-07-15 12:45:40.682961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.917 
[2024-07-15 12:45:40.682990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.917 [2024-07-15 12:45:40.695929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.917 [2024-07-15 12:45:40.695964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.917 [2024-07-15 12:45:40.708722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.917 [2024-07-15 12:45:40.708750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.917 [2024-07-15 12:45:40.721508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.917 [2024-07-15 12:45:40.721538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.917 [2024-07-15 12:45:40.734224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.917 [2024-07-15 12:45:40.734253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.917 [2024-07-15 12:45:40.748534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.917 [2024-07-15 12:45:40.748562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.917 [2024-07-15 12:45:40.763398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.917 [2024-07-15 12:45:40.763426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.917 [2024-07-15 12:45:40.778108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.917 [2024-07-15 12:45:40.778138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.917 [2024-07-15 12:45:40.792419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.917 [2024-07-15 12:45:40.792449] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.917 [2024-07-15 12:45:40.810313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.917 [2024-07-15 12:45:40.810342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.917 [2024-07-15 12:45:40.827653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.917 [2024-07-15 12:45:40.827682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.917 [2024-07-15 12:45:40.839986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.917 [2024-07-15 12:45:40.840015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.917 [2024-07-15 12:45:40.854360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.917 [2024-07-15 12:45:40.854389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.175 [2024-07-15 12:45:40.868555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.175 [2024-07-15 12:45:40.868583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.175 [2024-07-15 12:45:40.885495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.175 [2024-07-15 12:45:40.885524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.175 [2024-07-15 12:45:40.897991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.175 [2024-07-15 12:45:40.898019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.175 [2024-07-15 12:45:40.912159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.175 [2024-07-15 12:45:40.912187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:49.175 [2024-07-15 12:45:40.926489] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.175 [2024-07-15 12:45:40.926517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.175 [2024-07-15 12:45:40.943713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.175 [2024-07-15 12:45:40.943741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.175 [2024-07-15 12:45:40.961919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.175 [2024-07-15 12:45:40.961947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.175 [2024-07-15 12:45:40.980312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.175 [2024-07-15 12:45:40.980351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.175 [2024-07-15 12:45:40.993625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.175 [2024-07-15 12:45:40.993653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.175 [2024-07-15 12:45:41.009026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.175 [2024-07-15 12:45:41.009055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.175 [2024-07-15 12:45:41.024157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.175 [2024-07-15 12:45:41.024185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.175 [2024-07-15 12:45:41.041603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.175 [2024-07-15 12:45:41.041634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.175 [2024-07-15 12:45:41.055647] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.175 [2024-07-15 12:45:41.055677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.175 [2024-07-15 12:45:41.070313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.175 [2024-07-15 12:45:41.070341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.175 [2024-07-15 12:45:41.084653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.175 [2024-07-15 12:45:41.084681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.175 [2024-07-15 12:45:41.099263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.175 [2024-07-15 12:45:41.099292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.433 [2024-07-15 12:45:41.116844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.433 [2024-07-15 12:45:41.116874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.433 [2024-07-15 12:45:41.130941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.433 [2024-07-15 12:45:41.130969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.433 [2024-07-15 12:45:41.145129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.433 [2024-07-15 12:45:41.145156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.433 [2024-07-15 12:45:41.160268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.433 [2024-07-15 12:45:41.160296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.433 [2024-07-15 12:45:41.174808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:49.433 [2024-07-15 12:45:41.174837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.433 [... identical subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc.c:1546:nvmf_rpc_ns_paused error pair repeated for every subsequent attempt, 2024-07-15 12:45:41.189 through 12:45:43.766 ...]
[2024-07-15 12:45:43.766241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.022 [2024-07-15 12:45:43.780553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.022 [2024-07-15 12:45:43.780583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.022 [2024-07-15 12:45:43.795006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.022 [2024-07-15 12:45:43.795035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.022 [2024-07-15 12:45:43.809847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.022 [2024-07-15 12:45:43.809875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.022 [2024-07-15 12:45:43.824413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.022 [2024-07-15 12:45:43.824442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.022 [2024-07-15 12:45:43.838983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.022 [2024-07-15 12:45:43.839012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.022 [2024-07-15 12:45:43.853140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.022 [2024-07-15 12:45:43.853168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.022 [2024-07-15 12:45:43.867630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.022 [2024-07-15 12:45:43.867659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.022 [2024-07-15 12:45:43.881894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.022 [2024-07-15 12:45:43.881922] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.022 [2024-07-15 12:45:43.898947] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.022 [2024-07-15 12:45:43.898975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.022 [2024-07-15 12:45:43.912049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.022 [2024-07-15 12:45:43.912078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.022 [2024-07-15 12:45:43.924725] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.022 [2024-07-15 12:45:43.924754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.022 [2024-07-15 12:45:43.937798] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.022 [2024-07-15 12:45:43.937827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.022 [2024-07-15 12:45:43.951279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.022 [2024-07-15 12:45:43.951307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.282 [2024-07-15 12:45:43.969129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.283 [2024-07-15 12:45:43.969158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.283 [2024-07-15 12:45:43.983115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.283 [2024-07-15 12:45:43.983142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.283 [2024-07-15 12:45:43.997404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.283 [2024-07-15 12:45:43.997432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:52.283 [2024-07-15 12:45:44.011730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.283 [2024-07-15 12:45:44.011760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.283 [2024-07-15 12:45:44.026357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.283 [2024-07-15 12:45:44.026385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.283 [2024-07-15 12:45:44.041040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.283 [2024-07-15 12:45:44.041068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.283 [2024-07-15 12:45:44.058395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.283 [2024-07-15 12:45:44.058423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.283 [2024-07-15 12:45:44.072445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.283 [2024-07-15 12:45:44.072473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.283 [2024-07-15 12:45:44.086786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.283 [2024-07-15 12:45:44.086813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.283 [2024-07-15 12:45:44.101457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.283 [2024-07-15 12:45:44.101486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.283 [2024-07-15 12:45:44.114136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.283 [2024-07-15 12:45:44.114164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.283 00:15:52.283 Latency(us) 
00:15:52.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:52.283 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:15:52.283 Nvme1n1 : 5.01 8722.23 68.14 0.00 0.00 14654.39 6404.65 26691.03
00:15:52.283 ===================================================================================================================
00:15:52.283 Total : 8722.23 68.14 0.00 0.00 14654.39 6404.65 26691.03
00:15:52.283 [2024-07-15 12:45:44.121342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:52.283 [2024-07-15 12:45:44.121366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical error pair repeated for every retry between 12:45:44.129 and 12:45:44.309; intermediate copies elided ...]
00:15:52.543 [2024-07-15 12:45:44.321924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:52.543 [2024-07-15 12:45:44.321942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:52.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3903819) - No such process
00:15:52.543
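The retry flood above drowns out the rest of this section of the log. When reading such output offline, a small filter along these lines can collapse consecutive duplicate messages. This is purely illustrative and not part of the SPDK test scripts; `collapse_log` is a hypothetical helper name, and the timestamp-stripping pattern assumes the `elapsed [wall-clock] message` prefix format seen in the errors above.

```shell
# Illustrative log filter (not part of SPDK): collapse consecutive duplicate
# messages, comparing lines with their timestamp prefixes stripped, and emit
# an occurrence count instead of every copy.
collapse_log() {
  awk '{
    msg = $0
    sub(/^[0-9:.]+ \[[^]]*\] /, "", msg)   # drop "00:15:51.504 [2024-07-15 ...] "
    if (msg == prev) { n++; next }
    if (n > 1) printf "    (last message seen %d times)\n", n
    print msg; prev = msg; n = 1
  } END { if (n > 1) printf "    (last message seen %d times)\n", n }'
}
```

Piping a log segment through the function leaves one copy of each repeated message followed by its count, which makes the occasional genuinely new line easy to spot.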
12:45:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3903819 00:15:52.543 12:45:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:52.543 12:45:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.543 12:45:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:52.543 12:45:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.543 12:45:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:52.543 12:45:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.543 12:45:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:52.543 delay0 00:15:52.543 12:45:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.543 12:45:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:52.543 12:45:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.543 12:45:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:52.543 12:45:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.543 12:45:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:52.543 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.801 [2024-07-15 12:45:44.523422] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:59.371 Initializing NVMe Controllers 00:15:59.371 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:59.371 Associating TCP 
(addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:59.371 Initialization complete. Launching workers. 00:15:59.371 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 92 00:15:59.371 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 379, failed to submit 33 00:15:59.371 success 194, unsuccess 185, failed 0 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:59.371 rmmod nvme_tcp 00:15:59.371 rmmod nvme_fabrics 00:15:59.371 rmmod nvme_keyring 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3901700 ']' 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3901700 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3901700 ']' 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3901700 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:59.371 
12:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3901700 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3901700' 00:15:59.371 killing process with pid 3901700 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3901700 00:15:59.371 12:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3901700 00:15:59.371 12:45:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:59.371 12:45:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:59.371 12:45:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:59.371 12:45:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:59.371 12:45:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:59.371 12:45:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.371 12:45:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.371 12:45:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.278 12:45:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:01.278 00:16:01.278 real 0m32.595s 00:16:01.278 user 0m44.782s 00:16:01.278 sys 0m10.102s 00:16:01.278 12:45:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:01.278 12:45:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:01.278 ************************************ 00:16:01.278 END TEST nvmf_zcopy 00:16:01.278 ************************************ 00:16:01.278 12:45:53 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:16:01.278 12:45:53 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:01.278 12:45:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:01.278 12:45:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:01.278 12:45:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:01.278 ************************************ 00:16:01.278 START TEST nvmf_nmic 00:16:01.278 ************************************ 00:16:01.278 12:45:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:01.537 * Looking for test storage... 00:16:01.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:01.537 
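The export lines above show paths/export.sh prepending the same toolchain directories each time it is re-sourced, so PATH accumulates many duplicate entries. A dedup pass such as the following would keep only the first occurrence of each entry; `dedupe_path` is a hypothetical helper shown for illustration, not part of the test scripts.

```shell
# Hypothetical helper: drop duplicate PATH entries, preserving the order of
# first occurrence. Splits on ":" via awk record separators, keeps unseen
# entries, then trims the trailing ":" awk leaves behind.
dedupe_path() {
  printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}
```

For example, `PATH=$(dedupe_path "$PATH")` after sourcing would keep PATH from growing on every re-source.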
12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:01.537 12:45:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.101 12:45:58 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:08.101 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:08.102 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:08.102 Found 0000:af:00.1 (0x8086 - 0x159b) 
00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:08.102 Found net devices under 0000:af:00.0: cvl_0_0 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:08.102 12:45:58 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:08.102 Found net devices under 0000:af:00.1: cvl_0_1 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:08.102 12:45:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:08.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:08.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:16:08.102 00:16:08.102 --- 10.0.0.2 ping statistics --- 00:16:08.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.102 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:08.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:08.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:16:08.102 00:16:08.102 --- 10.0.0.1 ping statistics --- 00:16:08.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.102 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3909623 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3909623 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3909623 ']' 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.102 12:45:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.102 [2024-07-15 12:45:59.230667] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:16:08.102 [2024-07-15 12:45:59.230729] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.102 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.102 [2024-07-15 12:45:59.317344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:08.102 [2024-07-15 12:45:59.410843] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.102 [2024-07-15 12:45:59.410881] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.102 [2024-07-15 12:45:59.410892] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.102 [2024-07-15 12:45:59.410901] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.102 [2024-07-15 12:45:59.410908] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:08.102 [2024-07-15 12:45:59.410957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.102 [2024-07-15 12:45:59.411093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.102 [2024-07-15 12:45:59.411124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.102 [2024-07-15 12:45:59.411124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.360 [2024-07-15 12:46:00.235575] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.360 Malloc0 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.360 [2024-07-15 12:46:00.295265] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:08.360 test case1: single bdev can't be used in multiple subsystems 00:16:08.360 12:46:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:08.618 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.618 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.618 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.618 12:46:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:08.618 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.618 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.618 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.618 12:46:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:08.618 12:46:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:08.618 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.618 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.618 [2024-07-15 12:46:00.319175] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:08.618 [2024-07-15 12:46:00.319198] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:08.618 [2024-07-15 12:46:00.319208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.618 request: 00:16:08.618 { 00:16:08.618 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:08.618 "namespace": { 00:16:08.618 "bdev_name": "Malloc0", 00:16:08.618 "no_auto_visible": false 00:16:08.618 }, 00:16:08.618 "method": "nvmf_subsystem_add_ns", 00:16:08.618 "req_id": 1 00:16:08.618 } 00:16:08.618 Got JSON-RPC error response 00:16:08.618 response: 00:16:08.618 { 00:16:08.618 "code": -32602, 00:16:08.618 "message": "Invalid parameters" 00:16:08.618 } 00:16:08.618 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:08.618 12:46:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:08.618 12:46:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:08.618 12:46:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:16:08.618 Adding namespace failed - expected result. 00:16:08.618 12:46:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:08.618 test case2: host connect to nvmf target in multiple paths 00:16:08.618 12:46:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:08.618 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.618 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.618 [2024-07-15 12:46:00.331331] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:08.618 12:46:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.618 12:46:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:09.991 12:46:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:11.375 12:46:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:11.375 12:46:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:16:11.375 12:46:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:11.375 12:46:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:11.375 12:46:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:16:13.292 12:46:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:13.292 12:46:05 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:13.292 12:46:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:13.292 12:46:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:13.292 12:46:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:13.292 12:46:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:16:13.292 12:46:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:13.292 [global] 00:16:13.292 thread=1 00:16:13.292 invalidate=1 00:16:13.292 rw=write 00:16:13.292 time_based=1 00:16:13.292 runtime=1 00:16:13.292 ioengine=libaio 00:16:13.292 direct=1 00:16:13.292 bs=4096 00:16:13.292 iodepth=1 00:16:13.292 norandommap=0 00:16:13.292 numjobs=1 00:16:13.292 00:16:13.292 verify_dump=1 00:16:13.292 verify_backlog=512 00:16:13.292 verify_state_save=0 00:16:13.292 do_verify=1 00:16:13.292 verify=crc32c-intel 00:16:13.292 [job0] 00:16:13.292 filename=/dev/nvme0n1 00:16:13.292 Could not set queue depth (nvme0n1) 00:16:13.549 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:13.549 fio-3.35 00:16:13.549 Starting 1 thread 00:16:14.925 00:16:14.925 job0: (groupid=0, jobs=1): err= 0: pid=3910845: Mon Jul 15 12:46:06 2024 00:16:14.925 read: IOPS=486, BW=1946KiB/s (1993kB/s)(2024KiB/1040msec) 00:16:14.925 slat (nsec): min=8048, max=43232, avg=9377.10, stdev=3211.79 00:16:14.925 clat (usec): min=316, max=41062, avg=1816.06, stdev=7523.62 00:16:14.925 lat (usec): min=325, max=41083, avg=1825.44, stdev=7525.61 00:16:14.925 clat percentiles (usec): 00:16:14.925 | 1.00th=[ 326], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 355], 00:16:14.925 | 30.00th=[ 359], 40.00th=[ 359], 50.00th=[ 363], 60.00th=[ 367], 00:16:14.925 | 
70.00th=[ 371], 80.00th=[ 379], 90.00th=[ 441], 95.00th=[ 510], 00:16:14.925 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:14.925 | 99.99th=[41157] 00:16:14.925 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:16:14.925 slat (nsec): min=11059, max=40062, avg=12429.60, stdev=2329.25 00:16:14.925 clat (usec): min=168, max=394, avg=206.87, stdev=17.72 00:16:14.925 lat (usec): min=180, max=429, avg=219.30, stdev=18.16 00:16:14.925 clat percentiles (usec): 00:16:14.925 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 198], 00:16:14.925 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 206], 00:16:14.925 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 221], 95.00th=[ 239], 00:16:14.925 | 99.00th=[ 258], 99.50th=[ 293], 99.90th=[ 396], 99.95th=[ 396], 00:16:14.925 | 99.99th=[ 396] 00:16:14.925 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:14.925 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:14.925 lat (usec) : 250=49.41%, 500=47.45%, 750=1.38% 00:16:14.925 lat (msec) : 50=1.77% 00:16:14.925 cpu : usr=1.25%, sys=1.35%, ctx=1018, majf=0, minf=2 00:16:14.925 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:14.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.925 issued rwts: total=506,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.925 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:14.925 00:16:14.925 Run status group 0 (all jobs): 00:16:14.925 READ: bw=1946KiB/s (1993kB/s), 1946KiB/s-1946KiB/s (1993kB/s-1993kB/s), io=2024KiB (2073kB), run=1040-1040msec 00:16:14.925 WRITE: bw=1969KiB/s (2016kB/s), 1969KiB/s-1969KiB/s (2016kB/s-2016kB/s), io=2048KiB (2097kB), run=1040-1040msec 00:16:14.925 00:16:14.925 Disk stats (read/write): 00:16:14.925 nvme0n1: ios=397/512, 
merge=0/0, ticks=824/98, in_queue=922, util=92.69% 00:16:14.925 12:46:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:14.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:14.925 12:46:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:14.925 12:46:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:16:14.925 12:46:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:14.925 12:46:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.925 12:46:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:14.925 12:46:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.926 12:46:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:16:14.926 12:46:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:14.926 12:46:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:14.926 12:46:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:14.926 12:46:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:14.926 12:46:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:14.926 12:46:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:14.926 12:46:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:14.926 12:46:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:14.926 rmmod nvme_tcp 00:16:14.926 rmmod nvme_fabrics 00:16:14.926 rmmod nvme_keyring 00:16:14.926 12:46:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:14.926 12:46:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:14.926 12:46:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:14.926 12:46:06 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3909623 ']' 00:16:14.926 12:46:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3909623 00:16:14.926 12:46:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3909623 ']' 00:16:14.926 12:46:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3909623 00:16:14.926 12:46:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:16:15.184 12:46:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:15.184 12:46:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3909623 00:16:15.184 12:46:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:15.184 12:46:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:15.184 12:46:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3909623' 00:16:15.184 killing process with pid 3909623 00:16:15.184 12:46:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 3909623 00:16:15.184 12:46:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3909623 00:16:15.443 12:46:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:15.443 12:46:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:15.443 12:46:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:15.443 12:46:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:15.443 12:46:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:15.443 12:46:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.443 12:46:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.443 12:46:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.344 12:46:09 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:17.344 00:16:17.344 real 0m16.035s 00:16:17.344 user 0m44.580s 00:16:17.344 sys 0m5.334s 00:16:17.344 12:46:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:17.344 12:46:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:17.344 ************************************ 00:16:17.344 END TEST nvmf_nmic 00:16:17.344 ************************************ 00:16:17.344 12:46:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:17.344 12:46:09 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:17.344 12:46:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:17.344 12:46:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:17.344 12:46:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:17.603 ************************************ 00:16:17.603 START TEST nvmf_fio_target 00:16:17.603 ************************************ 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:17.603 * Looking for test storage... 
00:16:17.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.603 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:17.604 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:17.604 12:46:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:17.604 12:46:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.930 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:22.930 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:22.930 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:22.930 
12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:22.930 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:22.930 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:22.931 
12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:22.931 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:22.931 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:22.931 Found net devices under 0000:af:00.0: cvl_0_0 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:22.931 Found net devices under 0000:af:00.1: cvl_0_1 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:22.931 12:46:14 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:22.931 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:23.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:16:23.190 00:16:23.190 --- 10.0.0.2 ping statistics --- 00:16:23.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.190 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:23.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:23.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:16:23.190 00:16:23.190 --- 10.0.0.1 ping statistics --- 00:16:23.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.190 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3914674 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3914674 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 
-- # '[' -z 3914674 ']' 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:23.190 12:46:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.190 [2024-07-15 12:46:15.034969] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:16:23.190 [2024-07-15 12:46:15.035028] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.190 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.190 [2024-07-15 12:46:15.122631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:23.449 [2024-07-15 12:46:15.213958] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.449 [2024-07-15 12:46:15.214002] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.449 [2024-07-15 12:46:15.214013] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.449 [2024-07-15 12:46:15.214022] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.449 [2024-07-15 12:46:15.214030] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:23.449 [2024-07-15 12:46:15.214082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.449 [2024-07-15 12:46:15.214193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.449 [2024-07-15 12:46:15.214303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.449 [2024-07-15 12:46:15.214304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.385 12:46:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.385 12:46:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:16:24.385 12:46:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:24.385 12:46:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:24.385 12:46:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.385 12:46:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.385 12:46:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:24.385 [2024-07-15 12:46:16.253500] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.385 12:46:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:24.643 12:46:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:24.643 12:46:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:25.210 12:46:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:25.210 12:46:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:16:25.210 12:46:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:25.210 12:46:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:25.468 12:46:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:25.468 12:46:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:25.728 12:46:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:25.987 12:46:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:25.987 12:46:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:26.245 12:46:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:26.245 12:46:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:26.503 12:46:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:26.503 12:46:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:26.761 12:46:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:27.019 12:46:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:27.019 12:46:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:27.277 12:46:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:27.277 12:46:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:27.536 12:46:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.794 [2024-07-15 12:46:19.565953] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.794 12:46:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:28.053 12:46:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:28.311 12:46:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:29.689 12:46:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:29.689 12:46:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:16:29.689 12:46:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:29.689 12:46:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:16:29.689 12:46:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:16:29.689 12:46:21 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:16:31.592 12:46:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:31.592 12:46:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:31.592 12:46:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:31.592 12:46:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:16:31.592 12:46:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:31.592 12:46:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:16:31.592 12:46:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:31.592 [global] 00:16:31.592 thread=1 00:16:31.592 invalidate=1 00:16:31.592 rw=write 00:16:31.592 time_based=1 00:16:31.592 runtime=1 00:16:31.592 ioengine=libaio 00:16:31.592 direct=1 00:16:31.592 bs=4096 00:16:31.592 iodepth=1 00:16:31.592 norandommap=0 00:16:31.592 numjobs=1 00:16:31.592 00:16:31.592 verify_dump=1 00:16:31.592 verify_backlog=512 00:16:31.592 verify_state_save=0 00:16:31.592 do_verify=1 00:16:31.592 verify=crc32c-intel 00:16:31.592 [job0] 00:16:31.592 filename=/dev/nvme0n1 00:16:31.592 [job1] 00:16:31.592 filename=/dev/nvme0n2 00:16:31.592 [job2] 00:16:31.592 filename=/dev/nvme0n3 00:16:31.592 [job3] 00:16:31.592 filename=/dev/nvme0n4 00:16:31.592 Could not set queue depth (nvme0n1) 00:16:31.592 Could not set queue depth (nvme0n2) 00:16:31.592 Could not set queue depth (nvme0n3) 00:16:31.592 Could not set queue depth (nvme0n4) 00:16:32.159 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:32.159 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:16:32.159 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:32.159 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:32.159 fio-3.35 00:16:32.159 Starting 4 threads 00:16:33.095 00:16:33.095 job0: (groupid=0, jobs=1): err= 0: pid=3916422: Mon Jul 15 12:46:25 2024 00:16:33.095 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:33.095 slat (nsec): min=7801, max=38214, avg=9197.34, stdev=1715.68 00:16:33.095 clat (usec): min=358, max=725, avg=442.39, stdev=35.05 00:16:33.095 lat (usec): min=366, max=734, avg=451.59, stdev=35.07 00:16:33.095 clat percentiles (usec): 00:16:33.095 | 1.00th=[ 383], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 416], 00:16:33.095 | 30.00th=[ 424], 40.00th=[ 433], 50.00th=[ 441], 60.00th=[ 449], 00:16:33.095 | 70.00th=[ 457], 80.00th=[ 465], 90.00th=[ 478], 95.00th=[ 486], 00:16:33.095 | 99.00th=[ 586], 99.50th=[ 627], 99.90th=[ 717], 99.95th=[ 725], 00:16:33.095 | 99.99th=[ 725] 00:16:33.095 write: IOPS=1411, BW=5646KiB/s (5782kB/s)(5652KiB/1001msec); 0 zone resets 00:16:33.095 slat (nsec): min=8980, max=43411, avg=12513.25, stdev=2199.10 00:16:33.095 clat (usec): min=257, max=2260, avg=362.46, stdev=120.49 00:16:33.095 lat (usec): min=269, max=2304, avg=374.97, stdev=120.99 00:16:33.095 clat percentiles (usec): 00:16:33.095 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 302], 00:16:33.095 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 334], 00:16:33.095 | 70.00th=[ 343], 80.00th=[ 457], 90.00th=[ 515], 95.00th=[ 529], 00:16:33.095 | 99.00th=[ 725], 99.50th=[ 1106], 99.90th=[ 1631], 99.95th=[ 2245], 00:16:33.095 | 99.99th=[ 2245] 00:16:33.095 bw ( KiB/s): min= 4976, max= 4976, per=31.75%, avg=4976.00, stdev= 0.00, samples=1 00:16:33.095 iops : min= 1244, max= 1244, avg=1244.00, stdev= 0.00, samples=1 00:16:33.095 lat (usec) : 500=90.32%, 750=9.11%, 1000=0.25% 00:16:33.095 lat 
(msec) : 2=0.29%, 4=0.04% 00:16:33.095 cpu : usr=1.40%, sys=4.90%, ctx=2437, majf=0, minf=2 00:16:33.095 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:33.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.095 issued rwts: total=1024,1413,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.095 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:33.095 job1: (groupid=0, jobs=1): err= 0: pid=3916434: Mon Jul 15 12:46:25 2024 00:16:33.095 read: IOPS=20, BW=82.8KiB/s (84.8kB/s)(84.0KiB/1014msec) 00:16:33.095 slat (nsec): min=11171, max=26269, avg=22158.52, stdev=2661.32 00:16:33.095 clat (usec): min=40684, max=41972, avg=41002.42, stdev=232.92 00:16:33.095 lat (usec): min=40695, max=41995, avg=41024.58, stdev=233.84 00:16:33.095 clat percentiles (usec): 00:16:33.095 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:33.095 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:33.095 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:33.095 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:33.095 | 99.99th=[42206] 00:16:33.095 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:16:33.095 slat (nsec): min=9128, max=40765, avg=11996.46, stdev=2843.82 00:16:33.095 clat (usec): min=194, max=1220, avg=282.79, stdev=57.17 00:16:33.095 lat (usec): min=206, max=1234, avg=294.78, stdev=57.88 00:16:33.095 clat percentiles (usec): 00:16:33.095 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 243], 00:16:33.095 | 30.00th=[ 253], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 293], 00:16:33.095 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 334], 95.00th=[ 347], 00:16:33.095 | 99.00th=[ 371], 99.50th=[ 379], 99.90th=[ 1221], 99.95th=[ 1221], 00:16:33.095 | 99.99th=[ 1221] 00:16:33.095 bw ( KiB/s): min= 4096, 
max= 4096, per=26.13%, avg=4096.00, stdev= 0.00, samples=1 00:16:33.095 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:33.095 lat (usec) : 250=25.89%, 500=69.98% 00:16:33.095 lat (msec) : 2=0.19%, 50=3.94% 00:16:33.095 cpu : usr=0.30%, sys=0.89%, ctx=535, majf=0, minf=1 00:16:33.095 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:33.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.095 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.095 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:33.095 job2: (groupid=0, jobs=1): err= 0: pid=3916455: Mon Jul 15 12:46:25 2024 00:16:33.095 read: IOPS=18, BW=75.9KiB/s (77.7kB/s)(76.0KiB/1001msec) 00:16:33.095 slat (nsec): min=9605, max=23667, avg=22383.68, stdev=3106.53 00:16:33.095 clat (usec): min=40879, max=42036, avg=41223.16, stdev=476.26 00:16:33.095 lat (usec): min=40889, max=42058, avg=41245.55, stdev=476.76 00:16:33.095 clat percentiles (usec): 00:16:33.095 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:33.095 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:33.095 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:33.095 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:33.095 | 99.99th=[42206] 00:16:33.095 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:16:33.095 slat (nsec): min=9464, max=38994, avg=12210.24, stdev=4518.13 00:16:33.095 clat (usec): min=223, max=2749, avg=409.42, stdev=167.38 00:16:33.095 lat (usec): min=234, max=2760, avg=421.63, stdev=167.90 00:16:33.095 clat percentiles (usec): 00:16:33.095 | 1.00th=[ 227], 5.00th=[ 241], 10.00th=[ 249], 20.00th=[ 269], 00:16:33.095 | 30.00th=[ 285], 40.00th=[ 306], 50.00th=[ 453], 60.00th=[ 494], 00:16:33.095 | 70.00th=[ 
510], 80.00th=[ 519], 90.00th=[ 537], 95.00th=[ 553], 00:16:33.095 | 99.00th=[ 783], 99.50th=[ 840], 99.90th=[ 2737], 99.95th=[ 2737], 00:16:33.095 | 99.99th=[ 2737] 00:16:33.095 bw ( KiB/s): min= 4096, max= 4096, per=26.13%, avg=4096.00, stdev= 0.00, samples=1 00:16:33.095 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:33.095 lat (usec) : 250=10.55%, 500=50.09%, 750=34.65%, 1000=0.94% 00:16:33.095 lat (msec) : 4=0.19%, 50=3.58% 00:16:33.095 cpu : usr=0.20%, sys=0.70%, ctx=532, majf=0, minf=1 00:16:33.095 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:33.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.095 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.095 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:33.095 job3: (groupid=0, jobs=1): err= 0: pid=3916460: Mon Jul 15 12:46:25 2024 00:16:33.095 read: IOPS=1406, BW=5626KiB/s (5761kB/s)(5632KiB/1001msec) 00:16:33.095 slat (nsec): min=5232, max=24241, avg=8373.78, stdev=1142.78 00:16:33.095 clat (usec): min=287, max=3467, avg=369.36, stdev=91.48 00:16:33.095 lat (usec): min=295, max=3482, avg=377.73, stdev=91.77 00:16:33.095 clat percentiles (usec): 00:16:33.095 | 1.00th=[ 310], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 343], 00:16:33.095 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 367], 00:16:33.095 | 70.00th=[ 375], 80.00th=[ 383], 90.00th=[ 400], 95.00th=[ 474], 00:16:33.095 | 99.00th=[ 523], 99.50th=[ 529], 99.90th=[ 545], 99.95th=[ 3458], 00:16:33.095 | 99.99th=[ 3458] 00:16:33.095 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:33.095 slat (nsec): min=10663, max=46119, avg=12591.54, stdev=2143.18 00:16:33.095 clat (usec): min=204, max=1565, avg=285.99, stdev=55.77 00:16:33.095 lat (usec): min=216, max=1581, avg=298.58, stdev=55.99 00:16:33.095 clat 
percentiles (usec): 00:16:33.095 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 245], 00:16:33.095 | 30.00th=[ 253], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 289], 00:16:33.095 | 70.00th=[ 310], 80.00th=[ 330], 90.00th=[ 351], 95.00th=[ 367], 00:16:33.095 | 99.00th=[ 396], 99.50th=[ 445], 99.90th=[ 465], 99.95th=[ 1565], 00:16:33.095 | 99.99th=[ 1565] 00:16:33.095 bw ( KiB/s): min= 7280, max= 7280, per=46.45%, avg=7280.00, stdev= 0.00, samples=1 00:16:33.095 iops : min= 1820, max= 1820, avg=1820.00, stdev= 0.00, samples=1 00:16:33.095 lat (usec) : 250=13.79%, 500=84.95%, 750=1.19% 00:16:33.095 lat (msec) : 2=0.03%, 4=0.03% 00:16:33.095 cpu : usr=2.40%, sys=5.10%, ctx=2945, majf=0, minf=1 00:16:33.095 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:33.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.095 issued rwts: total=1408,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.095 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:33.095 00:16:33.095 Run status group 0 (all jobs): 00:16:33.095 READ: bw=9751KiB/s (9986kB/s), 75.9KiB/s-5626KiB/s (77.7kB/s-5761kB/s), io=9888KiB (10.1MB), run=1001-1014msec 00:16:33.095 WRITE: bw=15.3MiB/s (16.0MB/s), 2020KiB/s-6138KiB/s (2068kB/s-6285kB/s), io=15.5MiB (16.3MB), run=1001-1014msec 00:16:33.095 00:16:33.095 Disk stats (read/write): 00:16:33.095 nvme0n1: ios=1039/1024, merge=0/0, ticks=445/380, in_queue=825, util=87.07% 00:16:33.095 nvme0n2: ios=67/512, merge=0/0, ticks=932/138, in_queue=1070, util=98.68% 00:16:33.095 nvme0n3: ios=43/512, merge=0/0, ticks=1577/210, in_queue=1787, util=98.43% 00:16:33.095 nvme0n4: ios=1048/1508, merge=0/0, ticks=1358/418, in_queue=1776, util=98.53% 00:16:33.095 12:46:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t 
randwrite -r 1 -v 00:16:33.354 [global] 00:16:33.354 thread=1 00:16:33.354 invalidate=1 00:16:33.354 rw=randwrite 00:16:33.354 time_based=1 00:16:33.354 runtime=1 00:16:33.354 ioengine=libaio 00:16:33.354 direct=1 00:16:33.354 bs=4096 00:16:33.354 iodepth=1 00:16:33.354 norandommap=0 00:16:33.354 numjobs=1 00:16:33.354 00:16:33.354 verify_dump=1 00:16:33.354 verify_backlog=512 00:16:33.354 verify_state_save=0 00:16:33.354 do_verify=1 00:16:33.354 verify=crc32c-intel 00:16:33.354 [job0] 00:16:33.354 filename=/dev/nvme0n1 00:16:33.354 [job1] 00:16:33.354 filename=/dev/nvme0n2 00:16:33.354 [job2] 00:16:33.354 filename=/dev/nvme0n3 00:16:33.354 [job3] 00:16:33.354 filename=/dev/nvme0n4 00:16:33.354 Could not set queue depth (nvme0n1) 00:16:33.354 Could not set queue depth (nvme0n2) 00:16:33.354 Could not set queue depth (nvme0n3) 00:16:33.354 Could not set queue depth (nvme0n4) 00:16:33.612 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:33.612 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:33.612 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:33.612 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:33.612 fio-3.35 00:16:33.612 Starting 4 threads 00:16:35.004 00:16:35.004 job0: (groupid=0, jobs=1): err= 0: pid=3916871: Mon Jul 15 12:46:26 2024 00:16:35.004 read: IOPS=21, BW=85.2KiB/s (87.2kB/s)(88.0KiB/1033msec) 00:16:35.004 slat (nsec): min=9580, max=23590, avg=21046.73, stdev=3665.64 00:16:35.004 clat (usec): min=40487, max=41225, avg=40958.44, stdev=138.52 00:16:35.004 lat (usec): min=40497, max=41235, avg=40979.49, stdev=139.34 00:16:35.004 clat percentiles (usec): 00:16:35.004 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:35.004 | 30.00th=[41157], 40.00th=[41157], 
50.00th=[41157], 60.00th=[41157], 00:16:35.004 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:35.004 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:35.004 | 99.99th=[41157] 00:16:35.004 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:16:35.004 slat (nsec): min=10485, max=45095, avg=12289.66, stdev=2754.57 00:16:35.004 clat (usec): min=196, max=398, avg=240.93, stdev=19.87 00:16:35.004 lat (usec): min=207, max=439, avg=253.22, stdev=20.62 00:16:35.004 clat percentiles (usec): 00:16:35.004 | 1.00th=[ 202], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225], 00:16:35.004 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 243], 00:16:35.004 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:16:35.004 | 99.00th=[ 297], 99.50th=[ 322], 99.90th=[ 400], 99.95th=[ 400], 00:16:35.004 | 99.99th=[ 400] 00:16:35.004 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:16:35.004 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:35.004 lat (usec) : 250=69.66%, 500=26.22% 00:16:35.004 lat (msec) : 50=4.12% 00:16:35.004 cpu : usr=0.29%, sys=1.07%, ctx=536, majf=0, minf=2 00:16:35.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:35.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.004 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:35.004 job1: (groupid=0, jobs=1): err= 0: pid=3916882: Mon Jul 15 12:46:26 2024 00:16:35.004 read: IOPS=19, BW=79.8KiB/s (81.7kB/s)(80.0KiB/1003msec) 00:16:35.004 slat (nsec): min=9026, max=22675, avg=21572.10, stdev=2963.22 00:16:35.004 clat (usec): min=40759, max=42033, avg=41036.62, stdev=257.34 00:16:35.004 lat (usec): min=40781, max=42055, 
avg=41058.19, stdev=256.72 00:16:35.004 clat percentiles (usec): 00:16:35.004 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:35.004 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:35.004 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:35.004 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:35.004 | 99.99th=[42206] 00:16:35.004 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:16:35.004 slat (nsec): min=8424, max=36593, avg=10006.34, stdev=1740.60 00:16:35.004 clat (usec): min=205, max=598, avg=341.40, stdev=25.17 00:16:35.004 lat (usec): min=215, max=635, avg=351.40, stdev=25.74 00:16:35.004 clat percentiles (usec): 00:16:35.004 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 322], 00:16:35.004 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 347], 00:16:35.004 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 371], 95.00th=[ 383], 00:16:35.004 | 99.00th=[ 396], 99.50th=[ 404], 99.90th=[ 603], 99.95th=[ 603], 00:16:35.004 | 99.99th=[ 603] 00:16:35.004 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:16:35.004 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:35.004 lat (usec) : 250=0.19%, 500=95.86%, 750=0.19% 00:16:35.004 lat (msec) : 50=3.76% 00:16:35.004 cpu : usr=0.10%, sys=0.70%, ctx=532, majf=0, minf=1 00:16:35.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:35.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.004 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:35.004 job2: (groupid=0, jobs=1): err= 0: pid=3916899: Mon Jul 15 12:46:26 2024 00:16:35.004 read: IOPS=19, BW=78.7KiB/s (80.6kB/s)(80.0KiB/1016msec) 
00:16:35.004 slat (nsec): min=9953, max=25108, avg=22468.40, stdev=3038.82 00:16:35.004 clat (usec): min=40837, max=41805, avg=41005.54, stdev=197.37 00:16:35.004 lat (usec): min=40847, max=41828, avg=41028.01, stdev=198.02 00:16:35.004 clat percentiles (usec): 00:16:35.004 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:35.004 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:35.004 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:35.004 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:35.004 | 99.99th=[41681] 00:16:35.004 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:16:35.004 slat (nsec): min=11097, max=37616, avg=13380.90, stdev=2210.82 00:16:35.005 clat (usec): min=263, max=530, avg=364.73, stdev=33.95 00:16:35.005 lat (usec): min=274, max=543, avg=378.11, stdev=34.57 00:16:35.005 clat percentiles (usec): 00:16:35.005 | 1.00th=[ 277], 5.00th=[ 314], 10.00th=[ 326], 20.00th=[ 343], 00:16:35.005 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 371], 00:16:35.005 | 70.00th=[ 379], 80.00th=[ 388], 90.00th=[ 404], 95.00th=[ 416], 00:16:35.005 | 99.00th=[ 469], 99.50th=[ 506], 99.90th=[ 529], 99.95th=[ 529], 00:16:35.005 | 99.99th=[ 529] 00:16:35.005 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:16:35.005 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:35.005 lat (usec) : 500=95.68%, 750=0.56% 00:16:35.005 lat (msec) : 50=3.76% 00:16:35.005 cpu : usr=0.39%, sys=1.08%, ctx=533, majf=0, minf=1 00:16:35.005 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:35.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.005 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.005 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.005 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:16:35.005 job3: (groupid=0, jobs=1): err= 0: pid=3916904: Mon Jul 15 12:46:26 2024 00:16:35.005 read: IOPS=19, BW=79.8KiB/s (81.8kB/s)(80.0KiB/1002msec) 00:16:35.005 slat (nsec): min=8735, max=22766, avg=21546.85, stdev=3029.64 00:16:35.005 clat (usec): min=40830, max=42073, avg=41026.58, stdev=268.30 00:16:35.005 lat (usec): min=40852, max=42095, avg=41048.13, stdev=268.51 00:16:35.005 clat percentiles (usec): 00:16:35.005 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:16:35.005 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:35.005 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:35.005 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:35.005 | 99.99th=[42206] 00:16:35.005 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:16:35.005 slat (nsec): min=8754, max=38194, avg=9844.59, stdev=1551.44 00:16:35.005 clat (usec): min=276, max=612, avg=341.88, stdev=25.13 00:16:35.005 lat (usec): min=286, max=651, avg=351.72, stdev=25.74 00:16:35.005 clat percentiles (usec): 00:16:35.005 | 1.00th=[ 285], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 322], 00:16:35.005 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 347], 00:16:35.005 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 371], 95.00th=[ 375], 00:16:35.005 | 99.00th=[ 388], 99.50th=[ 408], 99.90th=[ 611], 99.95th=[ 611], 00:16:35.005 | 99.99th=[ 611] 00:16:35.005 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:16:35.005 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:35.005 lat (usec) : 500=96.05%, 750=0.19% 00:16:35.005 lat (msec) : 50=3.76% 00:16:35.005 cpu : usr=0.20%, sys=0.60%, ctx=532, majf=0, minf=1 00:16:35.005 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:35.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.005 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.005 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.005 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:35.005 00:16:35.005 Run status group 0 (all jobs): 00:16:35.005 READ: bw=318KiB/s (325kB/s), 78.7KiB/s-85.2KiB/s (80.6kB/s-87.2kB/s), io=328KiB (336kB), run=1002-1033msec 00:16:35.005 WRITE: bw=7930KiB/s (8121kB/s), 1983KiB/s-2044KiB/s (2030kB/s-2093kB/s), io=8192KiB (8389kB), run=1002-1033msec 00:16:35.005 00:16:35.005 Disk stats (read/write): 00:16:35.005 nvme0n1: ios=49/512, merge=0/0, ticks=1578/122, in_queue=1700, util=99.50% 00:16:35.005 nvme0n2: ios=42/512, merge=0/0, ticks=692/177, in_queue=869, util=87.70% 00:16:35.005 nvme0n3: ios=54/512, merge=0/0, ticks=1495/183, in_queue=1678, util=99.58% 00:16:35.005 nvme0n4: ios=16/512, merge=0/0, ticks=656/173, in_queue=829, util=89.61% 00:16:35.005 12:46:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:35.005 [global] 00:16:35.005 thread=1 00:16:35.005 invalidate=1 00:16:35.005 rw=write 00:16:35.005 time_based=1 00:16:35.005 runtime=1 00:16:35.005 ioengine=libaio 00:16:35.005 direct=1 00:16:35.005 bs=4096 00:16:35.005 iodepth=128 00:16:35.005 norandommap=0 00:16:35.005 numjobs=1 00:16:35.005 00:16:35.005 verify_dump=1 00:16:35.005 verify_backlog=512 00:16:35.005 verify_state_save=0 00:16:35.005 do_verify=1 00:16:35.005 verify=crc32c-intel 00:16:35.005 [job0] 00:16:35.005 filename=/dev/nvme0n1 00:16:35.005 [job1] 00:16:35.005 filename=/dev/nvme0n2 00:16:35.005 [job2] 00:16:35.005 filename=/dev/nvme0n3 00:16:35.005 [job3] 00:16:35.005 filename=/dev/nvme0n4 00:16:35.005 Could not set queue depth (nvme0n1) 00:16:35.005 Could not set queue depth (nvme0n2) 00:16:35.005 Could not set queue depth (nvme0n3) 00:16:35.005 Could not set queue depth (nvme0n4) 00:16:35.275 job0: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:35.275 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:35.275 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:35.275 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:35.275 fio-3.35 00:16:35.275 Starting 4 threads 00:16:36.653 00:16:36.653 job0: (groupid=0, jobs=1): err= 0: pid=3917348: Mon Jul 15 12:46:28 2024 00:16:36.653 read: IOPS=1501, BW=6006KiB/s (6150kB/s)(6144KiB/1023msec) 00:16:36.653 slat (usec): min=2, max=24255, avg=230.75, stdev=1666.46 00:16:36.653 clat (usec): min=9639, max=53356, avg=28407.29, stdev=6862.66 00:16:36.653 lat (usec): min=9650, max=53365, avg=28638.04, stdev=6990.67 00:16:36.653 clat percentiles (usec): 00:16:36.653 | 1.00th=[11469], 5.00th=[21365], 10.00th=[23725], 20.00th=[25035], 00:16:36.653 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25822], 60.00th=[26870], 00:16:36.653 | 70.00th=[29230], 80.00th=[32900], 90.00th=[39060], 95.00th=[43779], 00:16:36.653 | 99.00th=[50070], 99.50th=[51643], 99.90th=[53216], 99.95th=[53216], 00:16:36.653 | 99.99th=[53216] 00:16:36.653 write: IOPS=1976, BW=7906KiB/s (8096kB/s)(8088KiB/1023msec); 0 zone resets 00:16:36.653 slat (usec): min=3, max=25452, avg=312.55, stdev=1686.26 00:16:36.653 clat (usec): min=1545, max=157552, avg=40996.53, stdev=36700.36 00:16:36.653 lat (usec): min=1559, max=157585, avg=41309.08, stdev=36963.90 00:16:36.653 clat percentiles (msec): 00:16:36.653 | 1.00th=[ 8], 5.00th=[ 17], 10.00th=[ 21], 20.00th=[ 24], 00:16:36.653 | 30.00th=[ 25], 40.00th=[ 26], 50.00th=[ 27], 60.00th=[ 27], 00:16:36.653 | 70.00th=[ 28], 80.00th=[ 39], 90.00th=[ 112], 95.00th=[ 140], 00:16:36.653 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 159], 99.95th=[ 159], 00:16:36.653 | 99.99th=[ 159] 00:16:36.653 bw ( 
KiB/s): min= 4744, max=10416, per=19.47%, avg=7580.00, stdev=4010.71, samples=2 00:16:36.653 iops : min= 1186, max= 2604, avg=1895.00, stdev=1002.68, samples=2 00:16:36.653 lat (msec) : 2=0.06%, 10=1.26%, 20=5.99%, 50=81.96%, 100=4.08% 00:16:36.653 lat (msec) : 250=6.66% 00:16:36.653 cpu : usr=1.86%, sys=2.94%, ctx=228, majf=0, minf=1 00:16:36.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:16:36.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:36.653 issued rwts: total=1536,2022,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.653 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:36.653 job1: (groupid=0, jobs=1): err= 0: pid=3917354: Mon Jul 15 12:46:28 2024 00:16:36.653 read: IOPS=1523, BW=6095KiB/s (6242kB/s)(6144KiB/1008msec) 00:16:36.653 slat (usec): min=2, max=52541, avg=306.12, stdev=2405.05 00:16:36.653 clat (msec): min=9, max=121, avg=38.68, stdev=19.22 00:16:36.653 lat (msec): min=9, max=121, avg=38.99, stdev=19.39 00:16:36.653 clat percentiles (msec): 00:16:36.653 | 1.00th=[ 11], 5.00th=[ 14], 10.00th=[ 18], 20.00th=[ 26], 00:16:36.653 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 36], 00:16:36.653 | 70.00th=[ 40], 80.00th=[ 54], 90.00th=[ 69], 95.00th=[ 75], 00:16:36.653 | 99.00th=[ 100], 99.50th=[ 100], 99.90th=[ 100], 99.95th=[ 122], 00:16:36.653 | 99.99th=[ 122] 00:16:36.653 write: IOPS=1681, BW=6726KiB/s (6888kB/s)(6780KiB/1008msec); 0 zone resets 00:16:36.653 slat (usec): min=4, max=26443, avg=306.27, stdev=1969.17 00:16:36.653 clat (usec): min=1215, max=196708, avg=38393.43, stdev=39960.76 00:16:36.653 lat (msec): min=7, max=196, avg=38.70, stdev=40.19 00:16:36.653 clat percentiles (msec): 00:16:36.653 | 1.00th=[ 9], 5.00th=[ 13], 10.00th=[ 15], 20.00th=[ 17], 00:16:36.653 | 30.00th=[ 18], 40.00th=[ 26], 50.00th=[ 30], 60.00th=[ 31], 00:16:36.653 | 70.00th=[ 33], 80.00th=[ 37], 
90.00th=[ 75], 95.00th=[ 161], 00:16:36.653 | 99.00th=[ 192], 99.50th=[ 194], 99.90th=[ 197], 99.95th=[ 197], 00:16:36.653 | 99.99th=[ 197] 00:16:36.653 bw ( KiB/s): min= 5424, max= 7120, per=16.11%, avg=6272.00, stdev=1199.25, samples=2 00:16:36.653 iops : min= 1356, max= 1780, avg=1568.00, stdev=299.81, samples=2 00:16:36.653 lat (msec) : 2=0.03%, 10=1.61%, 20=26.28%, 50=54.10%, 100=13.53% 00:16:36.653 lat (msec) : 250=4.46% 00:16:36.653 cpu : usr=1.29%, sys=1.89%, ctx=157, majf=0, minf=1 00:16:36.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:16:36.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:36.653 issued rwts: total=1536,1695,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.653 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:36.653 job2: (groupid=0, jobs=1): err= 0: pid=3917372: Mon Jul 15 12:46:28 2024 00:16:36.653 read: IOPS=2190, BW=8763KiB/s (8974kB/s)(9000KiB/1027msec) 00:16:36.653 slat (nsec): min=1674, max=23744k, avg=193370.12, stdev=1458237.59 00:16:36.653 clat (usec): min=7262, max=61066, avg=23985.76, stdev=7499.82 00:16:36.653 lat (usec): min=7269, max=61091, avg=24179.13, stdev=7597.83 00:16:36.653 clat percentiles (usec): 00:16:36.653 | 1.00th=[10028], 5.00th=[14222], 10.00th=[18482], 20.00th=[19530], 00:16:36.653 | 30.00th=[20317], 40.00th=[21103], 50.00th=[21627], 60.00th=[22152], 00:16:36.653 | 70.00th=[24249], 80.00th=[30278], 90.00th=[35390], 95.00th=[39060], 00:16:36.653 | 99.00th=[50594], 99.50th=[50594], 99.90th=[51119], 99.95th=[53740], 00:16:36.653 | 99.99th=[61080] 00:16:36.653 write: IOPS=2492, BW=9971KiB/s (10.2MB/s)(10.0MiB/1027msec); 0 zone resets 00:16:36.653 slat (usec): min=2, max=60233, avg=207.43, stdev=1927.65 00:16:36.653 clat (msec): min=5, max=114, avg=24.98, stdev=12.64 00:16:36.653 lat (msec): min=5, max=114, avg=25.19, stdev=12.85 00:16:36.653 clat 
percentiles (msec): 00:16:36.653 | 1.00th=[ 8], 5.00th=[ 13], 10.00th=[ 17], 20.00th=[ 20], 00:16:36.653 | 30.00th=[ 20], 40.00th=[ 21], 50.00th=[ 21], 60.00th=[ 22], 00:16:36.653 | 70.00th=[ 27], 80.00th=[ 33], 90.00th=[ 37], 95.00th=[ 45], 00:16:36.653 | 99.00th=[ 87], 99.50th=[ 114], 99.90th=[ 114], 99.95th=[ 114], 00:16:36.653 | 99.99th=[ 114] 00:16:36.653 bw ( KiB/s): min= 8432, max=11864, per=26.07%, avg=10148.00, stdev=2426.79, samples=2 00:16:36.653 iops : min= 2108, max= 2966, avg=2537.00, stdev=606.70, samples=2 00:16:36.653 lat (msec) : 10=2.18%, 20=27.63%, 50=67.71%, 100=2.16%, 250=0.31% 00:16:36.653 cpu : usr=1.95%, sys=2.63%, ctx=255, majf=0, minf=1 00:16:36.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:16:36.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:36.653 issued rwts: total=2250,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.653 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:36.653 job3: (groupid=0, jobs=1): err= 0: pid=3917378: Mon Jul 15 12:46:28 2024 00:16:36.653 read: IOPS=3666, BW=14.3MiB/s (15.0MB/s)(15.3MiB/1066msec) 00:16:36.653 slat (nsec): min=1974, max=14729k, avg=141691.36, stdev=1008481.96 00:16:36.653 clat (usec): min=5487, max=76932, avg=19350.38, stdev=10406.92 00:16:36.653 lat (usec): min=5493, max=86834, avg=19492.07, stdev=10444.15 00:16:36.653 clat percentiles (usec): 00:16:36.653 | 1.00th=[ 6325], 5.00th=[13829], 10.00th=[15008], 20.00th=[15533], 00:16:36.653 | 30.00th=[15533], 40.00th=[15795], 50.00th=[16319], 60.00th=[16712], 00:16:36.653 | 70.00th=[17957], 80.00th=[21627], 90.00th=[25822], 95.00th=[29230], 00:16:36.653 | 99.00th=[77071], 99.50th=[77071], 99.90th=[77071], 99.95th=[77071], 00:16:36.653 | 99.99th=[77071] 00:16:36.653 write: IOPS=3842, BW=15.0MiB/s (15.7MB/s)(16.0MiB/1066msec); 0 zone resets 00:16:36.653 slat (usec): min=3, max=13674, 
avg=100.09, stdev=396.62 00:16:36.653 clat (usec): min=3353, max=31153, avg=14546.61, stdev=3859.60 00:16:36.653 lat (usec): min=3365, max=31167, avg=14646.70, stdev=3892.97 00:16:36.653 clat percentiles (usec): 00:16:36.653 | 1.00th=[ 4359], 5.00th=[ 6915], 10.00th=[ 8356], 20.00th=[10028], 00:16:36.653 | 30.00th=[15008], 40.00th=[16057], 50.00th=[16450], 60.00th=[16712], 00:16:36.653 | 70.00th=[16909], 80.00th=[16909], 90.00th=[17171], 95.00th=[17433], 00:16:36.653 | 99.00th=[18744], 99.50th=[18744], 99.90th=[30540], 99.95th=[30540], 00:16:36.653 | 99.99th=[31065] 00:16:36.653 bw ( KiB/s): min=16384, max=16384, per=42.09%, avg=16384.00, stdev= 0.00, samples=2 00:16:36.653 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:16:36.653 lat (msec) : 4=0.25%, 10=11.48%, 20=75.62%, 50=11.06%, 100=1.59% 00:16:36.653 cpu : usr=3.66%, sys=5.45%, ctx=534, majf=0, minf=1 00:16:36.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:36.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:36.653 issued rwts: total=3908,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.653 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:36.653 00:16:36.654 Run status group 0 (all jobs): 00:16:36.654 READ: bw=33.8MiB/s (35.5MB/s), 6006KiB/s-14.3MiB/s (6150kB/s-15.0MB/s), io=36.1MiB (37.8MB), run=1008-1066msec 00:16:36.654 WRITE: bw=38.0MiB/s (39.9MB/s), 6726KiB/s-15.0MiB/s (6888kB/s-15.7MB/s), io=40.5MiB (42.5MB), run=1008-1066msec 00:16:36.654 00:16:36.654 Disk stats (read/write): 00:16:36.654 nvme0n1: ios=1561/1719, merge=0/0, ticks=43845/52376, in_queue=96221, util=98.60% 00:16:36.654 nvme0n2: ios=1053/1456, merge=0/0, ticks=35959/52939, in_queue=88898, util=97.26% 00:16:36.654 nvme0n3: ios=1812/2048, merge=0/0, ticks=42974/49104, in_queue=92078, util=99.69% 00:16:36.654 nvme0n4: ios=3227/3584, merge=0/0, 
ticks=55284/51290, in_queue=106574, util=100.00% 00:16:36.654 12:46:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:36.654 [global] 00:16:36.654 thread=1 00:16:36.654 invalidate=1 00:16:36.654 rw=randwrite 00:16:36.654 time_based=1 00:16:36.654 runtime=1 00:16:36.654 ioengine=libaio 00:16:36.654 direct=1 00:16:36.654 bs=4096 00:16:36.654 iodepth=128 00:16:36.654 norandommap=0 00:16:36.654 numjobs=1 00:16:36.654 00:16:36.654 verify_dump=1 00:16:36.654 verify_backlog=512 00:16:36.654 verify_state_save=0 00:16:36.654 do_verify=1 00:16:36.654 verify=crc32c-intel 00:16:36.654 [job0] 00:16:36.654 filename=/dev/nvme0n1 00:16:36.654 [job1] 00:16:36.654 filename=/dev/nvme0n2 00:16:36.654 [job2] 00:16:36.654 filename=/dev/nvme0n3 00:16:36.654 [job3] 00:16:36.654 filename=/dev/nvme0n4 00:16:36.654 Could not set queue depth (nvme0n1) 00:16:36.654 Could not set queue depth (nvme0n2) 00:16:36.654 Could not set queue depth (nvme0n3) 00:16:36.654 Could not set queue depth (nvme0n4) 00:16:36.912 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:36.912 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:36.912 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:36.912 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:36.912 fio-3.35 00:16:36.912 Starting 4 threads 00:16:38.289 00:16:38.289 job0: (groupid=0, jobs=1): err= 0: pid=3917837: Mon Jul 15 12:46:30 2024 00:16:38.289 read: IOPS=3520, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1018msec) 00:16:38.289 slat (nsec): min=1848, max=17524k, avg=157271.87, stdev=1120917.58 00:16:38.289 clat (usec): min=5783, max=38418, avg=18977.43, stdev=4530.41 00:16:38.289 lat 
(usec): min=5789, max=38445, avg=19134.71, stdev=4605.85 00:16:38.289 clat percentiles (usec): 00:16:38.289 | 1.00th=[ 7570], 5.00th=[13698], 10.00th=[16057], 20.00th=[16581], 00:16:38.289 | 30.00th=[16909], 40.00th=[16909], 50.00th=[17171], 60.00th=[18482], 00:16:38.289 | 70.00th=[20055], 80.00th=[21365], 90.00th=[26084], 95.00th=[28443], 00:16:38.289 | 99.00th=[32113], 99.50th=[33817], 99.90th=[36439], 99.95th=[36439], 00:16:38.289 | 99.99th=[38536] 00:16:38.289 write: IOPS=3667, BW=14.3MiB/s (15.0MB/s)(14.6MiB/1018msec); 0 zone resets 00:16:38.289 slat (usec): min=2, max=13949, avg=110.05, stdev=528.64 00:16:38.289 clat (usec): min=1376, max=36298, avg=16364.75, stdev=3702.99 00:16:38.289 lat (usec): min=1388, max=36304, avg=16474.80, stdev=3744.16 00:16:38.289 clat percentiles (usec): 00:16:38.289 | 1.00th=[ 4490], 5.00th=[ 7898], 10.00th=[10945], 20.00th=[15139], 00:16:38.289 | 30.00th=[16450], 40.00th=[16712], 50.00th=[17171], 60.00th=[17433], 00:16:38.289 | 70.00th=[17433], 80.00th=[17695], 90.00th=[20055], 95.00th=[20055], 00:16:38.289 | 99.00th=[25560], 99.50th=[25822], 99.90th=[32113], 99.95th=[36439], 00:16:38.289 | 99.99th=[36439] 00:16:38.289 bw ( KiB/s): min=12472, max=16384, per=34.87%, avg=14428.00, stdev=2766.20, samples=2 00:16:38.289 iops : min= 3118, max= 4096, avg=3607.00, stdev=691.55, samples=2 00:16:38.289 lat (msec) : 2=0.04%, 4=0.25%, 10=5.12%, 20=75.55%, 50=19.04% 00:16:38.289 cpu : usr=3.44%, sys=4.62%, ctx=478, majf=0, minf=1 00:16:38.289 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:16:38.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:38.289 issued rwts: total=3584,3734,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:38.289 job1: (groupid=0, jobs=1): err= 0: pid=3917853: Mon Jul 15 12:46:30 2024 00:16:38.289 read: 
IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:16:38.289 slat (usec): min=2, max=7843, avg=121.54, stdev=692.90 00:16:38.289 clat (usec): min=9250, max=24653, avg=14645.76, stdev=2226.66 00:16:38.289 lat (usec): min=9259, max=24671, avg=14767.30, stdev=2289.32 00:16:38.289 clat percentiles (usec): 00:16:38.289 | 1.00th=[ 9765], 5.00th=[10814], 10.00th=[11731], 20.00th=[13829], 00:16:38.289 | 30.00th=[14091], 40.00th=[14091], 50.00th=[14222], 60.00th=[14484], 00:16:38.289 | 70.00th=[14746], 80.00th=[16581], 90.00th=[17433], 95.00th=[18744], 00:16:38.289 | 99.00th=[21627], 99.50th=[22414], 99.90th=[23725], 99.95th=[24511], 00:16:38.289 | 99.99th=[24773] 00:16:38.289 write: IOPS=4411, BW=17.2MiB/s (18.1MB/s)(17.4MiB/1008msec); 0 zone resets 00:16:38.289 slat (usec): min=3, max=6957, avg=106.14, stdev=368.20 00:16:38.289 clat (usec): min=7231, max=24204, avg=15179.80, stdev=2042.28 00:16:38.289 lat (usec): min=7514, max=24807, avg=15285.94, stdev=2053.35 00:16:38.289 clat percentiles (usec): 00:16:38.289 | 1.00th=[ 9110], 5.00th=[12256], 10.00th=[13698], 20.00th=[14222], 00:16:38.289 | 30.00th=[14484], 40.00th=[14615], 50.00th=[14746], 60.00th=[15008], 00:16:38.289 | 70.00th=[15270], 80.00th=[16909], 90.00th=[17433], 95.00th=[18744], 00:16:38.289 | 99.00th=[22152], 99.50th=[23200], 99.90th=[24249], 99.95th=[24249], 00:16:38.289 | 99.99th=[24249] 00:16:38.289 bw ( KiB/s): min=16584, max=17976, per=41.76%, avg=17280.00, stdev=984.29, samples=2 00:16:38.289 iops : min= 4146, max= 4494, avg=4320.00, stdev=246.07, samples=2 00:16:38.289 lat (msec) : 10=1.93%, 20=95.77%, 50=2.29% 00:16:38.289 cpu : usr=4.07%, sys=6.06%, ctx=621, majf=0, minf=1 00:16:38.289 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:38.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:38.289 issued rwts: total=4096,4447,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:16:38.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:38.289 job2: (groupid=0, jobs=1): err= 0: pid=3917876: Mon Jul 15 12:46:30 2024 00:16:38.289 read: IOPS=851, BW=3407KiB/s (3488kB/s)(3560KiB/1045msec) 00:16:38.289 slat (nsec): min=1885, max=46324k, avg=409866.99, stdev=3360386.06 00:16:38.289 clat (msec): min=32, max=104, avg=56.09, stdev=13.29 00:16:38.289 lat (msec): min=32, max=104, avg=56.50, stdev=13.51 00:16:38.289 clat percentiles (msec): 00:16:38.289 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 43], 00:16:38.289 | 30.00th=[ 56], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 59], 00:16:38.289 | 70.00th=[ 59], 80.00th=[ 66], 90.00th=[ 78], 95.00th=[ 78], 00:16:38.289 | 99.00th=[ 79], 99.50th=[ 79], 99.90th=[ 105], 99.95th=[ 105], 00:16:38.289 | 99.99th=[ 105] 00:16:38.289 write: IOPS=979, BW=3920KiB/s (4014kB/s)(4096KiB/1045msec); 0 zone resets 00:16:38.289 slat (usec): min=3, max=52758, avg=614.72, stdev=4203.65 00:16:38.289 clat (msec): min=13, max=312, avg=80.63, stdev=69.79 00:16:38.289 lat (msec): min=13, max=312, avg=81.25, stdev=70.22 00:16:38.289 clat percentiles (msec): 00:16:38.289 | 1.00th=[ 31], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 36], 00:16:38.289 | 30.00th=[ 51], 40.00th=[ 54], 50.00th=[ 57], 60.00th=[ 57], 00:16:38.289 | 70.00th=[ 58], 80.00th=[ 97], 90.00th=[ 199], 95.00th=[ 259], 00:16:38.289 | 99.00th=[ 309], 99.50th=[ 313], 99.90th=[ 313], 99.95th=[ 313], 00:16:38.289 | 99.99th=[ 313] 00:16:38.289 bw ( KiB/s): min= 3984, max= 4208, per=9.90%, avg=4096.00, stdev=158.39, samples=2 00:16:38.289 iops : min= 996, max= 1052, avg=1024.00, stdev=39.60, samples=2 00:16:38.289 lat (msec) : 20=0.31%, 50=25.34%, 100=63.64%, 250=7.42%, 500=3.29% 00:16:38.289 cpu : usr=0.57%, sys=1.25%, ctx=90, majf=0, minf=1 00:16:38.289 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:16:38.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.289 complete : 
0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:38.289 issued rwts: total=890,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:38.289 job3: (groupid=0, jobs=1): err= 0: pid=3917884: Mon Jul 15 12:46:30 2024 00:16:38.289 read: IOPS=1511, BW=6047KiB/s (6192kB/s)(6144KiB/1016msec) 00:16:38.289 slat (usec): min=2, max=57272, avg=330.41, stdev=2838.35 00:16:38.289 clat (msec): min=26, max=116, avg=43.45, stdev=15.65 00:16:38.289 lat (msec): min=26, max=116, avg=43.78, stdev=15.91 00:16:38.289 clat percentiles (msec): 00:16:38.289 | 1.00th=[ 27], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 32], 00:16:38.289 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 39], 60.00th=[ 42], 00:16:38.289 | 70.00th=[ 50], 80.00th=[ 59], 90.00th=[ 61], 95.00th=[ 72], 00:16:38.289 | 99.00th=[ 86], 99.50th=[ 97], 99.90th=[ 111], 99.95th=[ 116], 00:16:38.289 | 99.99th=[ 116] 00:16:38.289 write: IOPS=1579, BW=6319KiB/s (6471kB/s)(6420KiB/1016msec); 0 zone resets 00:16:38.289 slat (usec): min=6, max=44485, avg=301.14, stdev=2839.17 00:16:38.289 clat (usec): min=1154, max=101099, avg=38711.27, stdev=13206.06 00:16:38.289 lat (usec): min=1164, max=101129, avg=39012.41, stdev=13528.53 00:16:38.289 clat percentiles (msec): 00:16:38.289 | 1.00th=[ 23], 5.00th=[ 26], 10.00th=[ 27], 20.00th=[ 28], 00:16:38.289 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 43], 00:16:38.289 | 70.00th=[ 50], 80.00th=[ 57], 90.00th=[ 57], 95.00th=[ 57], 00:16:38.289 | 99.00th=[ 58], 99.50th=[ 68], 99.90th=[ 101], 99.95th=[ 102], 00:16:38.289 | 99.99th=[ 102] 00:16:38.289 bw ( KiB/s): min= 4960, max= 7328, per=14.85%, avg=6144.00, stdev=1674.43, samples=2 00:16:38.289 iops : min= 1240, max= 1832, avg=1536.00, stdev=418.61, samples=2 00:16:38.289 lat (msec) : 2=0.06%, 20=0.41%, 50=69.75%, 100=29.64%, 250=0.13% 00:16:38.289 cpu : usr=0.89%, sys=2.07%, ctx=68, majf=0, minf=1 00:16:38.289 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 
16=0.5%, 32=1.0%, >=64=98.0% 00:16:38.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:38.289 issued rwts: total=1536,1605,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:38.289 00:16:38.289 Run status group 0 (all jobs): 00:16:38.289 READ: bw=37.8MiB/s (39.6MB/s), 3407KiB/s-15.9MiB/s (3488kB/s-16.6MB/s), io=39.5MiB (41.4MB), run=1008-1045msec 00:16:38.289 WRITE: bw=40.4MiB/s (42.4MB/s), 3920KiB/s-17.2MiB/s (4014kB/s-18.1MB/s), io=42.2MiB (44.3MB), run=1008-1045msec 00:16:38.289 00:16:38.289 Disk stats (read/write): 00:16:38.289 nvme0n1: ios=2987/3072, merge=0/0, ticks=52725/47767, in_queue=100492, util=84.97% 00:16:38.289 nvme0n2: ios=3598/3615, merge=0/0, ticks=26428/25039, in_queue=51467, util=99.79% 00:16:38.289 nvme0n3: ios=547/743, merge=0/0, ticks=30083/71292, in_queue=101375, util=97.44% 00:16:38.290 nvme0n4: ios=1033/1352, merge=0/0, ticks=49309/51309, in_queue=100618, util=96.44% 00:16:38.290 12:46:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:38.290 12:46:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3917972 00:16:38.290 12:46:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:38.290 12:46:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:38.290 [global] 00:16:38.290 thread=1 00:16:38.290 invalidate=1 00:16:38.290 rw=read 00:16:38.290 time_based=1 00:16:38.290 runtime=10 00:16:38.290 ioengine=libaio 00:16:38.290 direct=1 00:16:38.290 bs=4096 00:16:38.290 iodepth=1 00:16:38.290 norandommap=1 00:16:38.290 numjobs=1 00:16:38.290 00:16:38.290 [job0] 00:16:38.290 filename=/dev/nvme0n1 00:16:38.290 [job1] 00:16:38.290 filename=/dev/nvme0n2 00:16:38.290 [job2] 00:16:38.290 filename=/dev/nvme0n3 00:16:38.290 [job3] 
00:16:38.290 filename=/dev/nvme0n4 00:16:38.290 Could not set queue depth (nvme0n1) 00:16:38.290 Could not set queue depth (nvme0n2) 00:16:38.290 Could not set queue depth (nvme0n3) 00:16:38.290 Could not set queue depth (nvme0n4) 00:16:38.548 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:38.548 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:38.548 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:38.548 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:38.548 fio-3.35 00:16:38.548 Starting 4 threads 00:16:41.110 12:46:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:41.367 12:46:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:41.625 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=696320, buflen=4096 00:16:41.625 fio: pid=3918315, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:41.882 12:46:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:41.883 12:46:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:41.883 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=25223168, buflen=4096 00:16:41.883 fio: pid=3918314, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:42.141 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=5144576, buflen=4096 00:16:42.141 fio: pid=3918311, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:42.141 12:46:33 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:42.141 12:46:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:42.141 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=38133760, buflen=4096 00:16:42.141 fio: pid=3918312, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:42.400 12:46:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:42.400 12:46:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:42.400 00:16:42.400 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3918311: Mon Jul 15 12:46:34 2024 00:16:42.400 read: IOPS=392, BW=1569KiB/s (1607kB/s)(5024KiB/3202msec) 00:16:42.400 slat (usec): min=6, max=17719, avg=28.03, stdev=545.40 00:16:42.400 clat (usec): min=340, max=42307, avg=2502.59, stdev=9005.35 00:16:42.400 lat (usec): min=347, max=49046, avg=2516.53, stdev=9036.43 00:16:42.400 clat percentiles (usec): 00:16:42.400 | 1.00th=[ 351], 5.00th=[ 367], 10.00th=[ 379], 20.00th=[ 388], 00:16:42.400 | 30.00th=[ 396], 40.00th=[ 404], 50.00th=[ 412], 60.00th=[ 420], 00:16:42.400 | 70.00th=[ 433], 80.00th=[ 445], 90.00th=[ 469], 95.00th=[40633], 00:16:42.400 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:42.400 | 99.99th=[42206] 00:16:42.400 bw ( KiB/s): min= 96, max= 6968, per=8.54%, avg=1669.33, stdev=2790.19, samples=6 00:16:42.400 iops : min= 24, max= 1742, avg=417.33, stdev=697.55, samples=6 00:16:42.400 lat (usec) : 500=92.76%, 750=1.99% 00:16:42.400 lat (msec) : 10=0.08%, 50=5.09% 00:16:42.400 cpu : usr=0.12%, sys=0.34%, ctx=1261, majf=0, minf=1 00:16:42.400 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:42.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.400 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.400 issued rwts: total=1257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.400 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:42.400 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3918312: Mon Jul 15 12:46:34 2024 00:16:42.400 read: IOPS=2694, BW=10.5MiB/s (11.0MB/s)(36.4MiB/3456msec) 00:16:42.400 slat (usec): min=5, max=24833, avg=17.36, stdev=368.07 00:16:42.400 clat (usec): min=240, max=41501, avg=349.36, stdev=602.91 00:16:42.400 lat (usec): min=249, max=41508, avg=366.72, stdev=707.26 00:16:42.400 clat percentiles (usec): 00:16:42.400 | 1.00th=[ 269], 5.00th=[ 285], 10.00th=[ 297], 20.00th=[ 310], 00:16:42.400 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 343], 00:16:42.400 | 70.00th=[ 355], 80.00th=[ 371], 90.00th=[ 396], 95.00th=[ 412], 00:16:42.400 | 99.00th=[ 498], 99.50th=[ 562], 99.90th=[ 742], 99.95th=[ 1500], 00:16:42.400 | 99.99th=[41681] 00:16:42.400 bw ( KiB/s): min= 9813, max=12280, per=56.75%, avg=11096.83, stdev=966.05, samples=6 00:16:42.400 iops : min= 2453, max= 3070, avg=2774.17, stdev=241.58, samples=6 00:16:42.400 lat (usec) : 250=0.02%, 500=99.02%, 750=0.85%, 1000=0.01% 00:16:42.400 lat (msec) : 2=0.05%, 4=0.01%, 50=0.02% 00:16:42.400 cpu : usr=1.36%, sys=4.34%, ctx=9319, majf=0, minf=1 00:16:42.400 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:42.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.400 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.400 issued rwts: total=9311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.400 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:42.400 job2: (groupid=0, jobs=1): err=121 
(file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3918314: Mon Jul 15 12:46:34 2024 00:16:42.400 read: IOPS=2073, BW=8291KiB/s (8490kB/s)(24.1MiB/2971msec) 00:16:42.400 slat (usec): min=5, max=11861, avg=11.33, stdev=209.19 00:16:42.400 clat (usec): min=345, max=41517, avg=465.57, stdev=735.97 00:16:42.400 lat (usec): min=353, max=41524, avg=476.90, stdev=765.98 00:16:42.400 clat percentiles (usec): 00:16:42.400 | 1.00th=[ 396], 5.00th=[ 408], 10.00th=[ 416], 20.00th=[ 429], 00:16:42.401 | 30.00th=[ 433], 40.00th=[ 441], 50.00th=[ 445], 60.00th=[ 453], 00:16:42.401 | 70.00th=[ 461], 80.00th=[ 469], 90.00th=[ 486], 95.00th=[ 515], 00:16:42.401 | 99.00th=[ 652], 99.50th=[ 693], 99.90th=[ 750], 99.95th=[ 930], 00:16:42.401 | 99.99th=[41681] 00:16:42.401 bw ( KiB/s): min= 8624, max= 8752, per=44.48%, avg=8697.60, stdev=55.83, samples=5 00:16:42.401 iops : min= 2156, max= 2188, avg=2174.40, stdev=13.96, samples=5 00:16:42.401 lat (usec) : 500=93.64%, 750=6.25%, 1000=0.06% 00:16:42.401 lat (msec) : 50=0.03% 00:16:42.401 cpu : usr=0.57%, sys=1.95%, ctx=6161, majf=0, minf=1 00:16:42.401 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:42.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.401 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.401 issued rwts: total=6159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.401 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:42.401 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3918315: Mon Jul 15 12:46:34 2024 00:16:42.401 read: IOPS=63, BW=252KiB/s (258kB/s)(680KiB/2702msec) 00:16:42.401 slat (nsec): min=6335, max=31161, avg=13290.58, stdev=7674.69 00:16:42.401 clat (usec): min=384, max=42927, avg=15756.00, stdev=19989.15 00:16:42.401 lat (usec): min=391, max=42950, avg=15769.25, stdev=19995.51 00:16:42.401 clat percentiles (usec): 00:16:42.401 | 
1.00th=[ 388], 5.00th=[ 400], 10.00th=[ 416], 20.00th=[ 429], 00:16:42.401 | 30.00th=[ 437], 40.00th=[ 445], 50.00th=[ 461], 60.00th=[ 701], 00:16:42.401 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:16:42.401 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:42.401 | 99.99th=[42730] 00:16:42.401 bw ( KiB/s): min= 96, max= 896, per=1.35%, avg=264.00, stdev=353.36, samples=5 00:16:42.401 iops : min= 24, max= 224, avg=66.00, stdev=88.34, samples=5 00:16:42.401 lat (usec) : 500=56.73%, 750=3.51%, 1000=1.17% 00:16:42.401 lat (msec) : 2=1.17%, 50=36.84% 00:16:42.401 cpu : usr=0.04%, sys=0.07%, ctx=171, majf=0, minf=2 00:16:42.401 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:42.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.401 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.401 issued rwts: total=171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.401 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:42.401 00:16:42.401 Run status group 0 (all jobs): 00:16:42.401 READ: bw=19.1MiB/s (20.0MB/s), 252KiB/s-10.5MiB/s (258kB/s-11.0MB/s), io=66.0MiB (69.2MB), run=2702-3456msec 00:16:42.401 00:16:42.401 Disk stats (read/write): 00:16:42.401 nvme0n1: ios=1283/0, merge=0/0, ticks=3807/0, in_queue=3807, util=98.98% 00:16:42.401 nvme0n2: ios=9049/0, merge=0/0, ticks=3005/0, in_queue=3005, util=93.70% 00:16:42.401 nvme0n3: ios=6094/0, merge=0/0, ticks=2719/0, in_queue=2719, util=95.61% 00:16:42.401 nvme0n4: ios=167/0, merge=0/0, ticks=2554/0, in_queue=2554, util=96.45% 00:16:42.659 12:46:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:42.659 12:46:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:42.917 12:46:34 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:42.918 12:46:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:43.176 12:46:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:43.176 12:46:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:43.436 12:46:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:43.436 12:46:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:43.695 12:46:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:43.695 12:46:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3917972 00:16:43.695 12:46:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:43.695 12:46:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:43.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.695 12:46:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:43.695 12:46:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:16:43.695 12:46:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:43.695 12:46:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.695 12:46:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:43.695 12:46:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:16:43.695 12:46:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:16:43.695 12:46:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:43.695 12:46:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:43.695 nvmf hotplug test: fio failed as expected 00:16:43.695 12:46:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.954 12:46:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:43.954 12:46:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:43.954 12:46:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:43.954 12:46:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:43.954 12:46:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:43.954 12:46:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:43.954 12:46:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:43.954 12:46:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:43.954 12:46:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:43.954 12:46:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:43.954 12:46:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:43.954 rmmod nvme_tcp 00:16:43.954 rmmod nvme_fabrics 00:16:43.954 rmmod nvme_keyring 00:16:44.213 12:46:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:44.213 12:46:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:44.213 12:46:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:44.213 12:46:35 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3914674 ']' 00:16:44.213 12:46:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3914674 00:16:44.213 12:46:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3914674 ']' 00:16:44.213 12:46:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3914674 00:16:44.213 12:46:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:16:44.213 12:46:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:44.213 12:46:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3914674 00:16:44.213 12:46:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:44.213 12:46:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:44.213 12:46:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3914674' 00:16:44.213 killing process with pid 3914674 00:16:44.213 12:46:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3914674 00:16:44.213 12:46:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3914674 00:16:44.473 12:46:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:44.473 12:46:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:44.473 12:46:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:44.473 12:46:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:44.473 12:46:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:44.473 12:46:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.473 12:46:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:16:44.473 12:46:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.376 12:46:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:46.376 00:16:46.376 real 0m28.943s 00:16:46.376 user 2m28.632s 00:16:46.376 sys 0m8.392s 00:16:46.376 12:46:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:46.376 12:46:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.376 ************************************ 00:16:46.376 END TEST nvmf_fio_target 00:16:46.376 ************************************ 00:16:46.376 12:46:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:46.376 12:46:38 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:46.376 12:46:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:46.376 12:46:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:46.376 12:46:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:46.376 ************************************ 00:16:46.376 START TEST nvmf_bdevio 00:16:46.376 ************************************ 00:16:46.376 12:46:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:46.721 * Looking for test storage... 
00:16:46.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:16:46.721 12:46:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:53.348 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:53.349 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:53.349 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:53.349 Found net devices under 0000:af:00.0: cvl_0_0 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:53.349 Found net devices under 0000:af:00.1: cvl_0_1 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:53.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:53.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:16:53.349 00:16:53.349 --- 10.0.0.2 ping statistics --- 00:16:53.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.349 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:53.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:53.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:16:53.349 00:16:53.349 --- 10.0.0.1 ping statistics --- 00:16:53.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.349 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3922846 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3922846 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3922846 ']' 00:16:53.349 12:46:44 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:53.349 12:46:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:53.349 [2024-07-15 12:46:44.395347] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:16:53.349 [2024-07-15 12:46:44.395408] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.349 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.349 [2024-07-15 12:46:44.513498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:53.349 [2024-07-15 12:46:44.663346] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.349 [2024-07-15 12:46:44.663414] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:53.349 [2024-07-15 12:46:44.663435] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.349 [2024-07-15 12:46:44.663453] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.349 [2024-07-15 12:46:44.663469] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:53.349 [2024-07-15 12:46:44.663618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:53.349 [2024-07-15 12:46:44.663730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:53.349 [2024-07-15 12:46:44.663841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:53.349 [2024-07-15 12:46:44.663847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:53.609 [2024-07-15 12:46:45.390340] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:53.609 Malloc0 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:53.609 [2024-07-15 12:46:45.454564] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:16:53.609 { 00:16:53.609 "params": { 00:16:53.609 "name": "Nvme$subsystem", 00:16:53.609 "trtype": "$TEST_TRANSPORT", 00:16:53.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:53.609 "adrfam": "ipv4", 00:16:53.609 "trsvcid": "$NVMF_PORT", 00:16:53.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:53.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:53.609 "hdgst": ${hdgst:-false}, 00:16:53.609 "ddgst": ${ddgst:-false} 00:16:53.609 }, 00:16:53.609 "method": "bdev_nvme_attach_controller" 00:16:53.609 } 00:16:53.609 EOF 00:16:53.609 )") 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:16:53.609 12:46:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:53.609 "params": { 00:16:53.609 "name": "Nvme1", 00:16:53.609 "trtype": "tcp", 00:16:53.609 "traddr": "10.0.0.2", 00:16:53.609 "adrfam": "ipv4", 00:16:53.609 "trsvcid": "4420", 00:16:53.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.609 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:53.609 "hdgst": false, 00:16:53.609 "ddgst": false 00:16:53.609 }, 00:16:53.609 "method": "bdev_nvme_attach_controller" 00:16:53.609 }' 00:16:53.609 [2024-07-15 12:46:45.508707] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:16:53.609 [2024-07-15 12:46:45.508769] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3923130 ] 00:16:53.609 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.868 [2024-07-15 12:46:45.590862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:53.868 [2024-07-15 12:46:45.678552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.868 [2024-07-15 12:46:45.678664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.868 [2024-07-15 12:46:45.678665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.127 I/O targets: 00:16:54.127 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:54.127 00:16:54.127 00:16:54.127 CUnit - A unit testing framework for C - Version 2.1-3 00:16:54.127 http://cunit.sourceforge.net/ 00:16:54.127 00:16:54.127 00:16:54.127 Suite: bdevio tests on: Nvme1n1 00:16:54.127 Test: blockdev write read block ...passed 00:16:54.127 Test: blockdev write zeroes read block ...passed 00:16:54.127 Test: blockdev write zeroes read no split ...passed 00:16:54.127 Test: blockdev write zeroes read split ...passed 00:16:54.127 Test: blockdev write zeroes read split partial ...passed 00:16:54.127 Test: blockdev reset ...[2024-07-15 12:46:46.045662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:54.127 [2024-07-15 12:46:46.045739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1666c80 (9): Bad file descriptor 00:16:54.386 [2024-07-15 12:46:46.143327] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:54.386 passed 00:16:54.386 Test: blockdev write read 8 blocks ...passed 00:16:54.386 Test: blockdev write read size > 128k ...passed 00:16:54.386 Test: blockdev write read invalid size ...passed 00:16:54.386 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:54.386 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:54.386 Test: blockdev write read max offset ...passed 00:16:54.645 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:54.645 Test: blockdev writev readv 8 blocks ...passed 00:16:54.645 Test: blockdev writev readv 30 x 1block ...passed 00:16:54.645 Test: blockdev writev readv block ...passed 00:16:54.646 Test: blockdev writev readv size > 128k ...passed 00:16:54.646 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:54.646 Test: blockdev comparev and writev ...[2024-07-15 12:46:46.404447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.646 [2024-07-15 12:46:46.404511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.646 [2024-07-15 12:46:46.404551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.646 [2024-07-15 12:46:46.404576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:54.646 [2024-07-15 12:46:46.405215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.646 [2024-07-15 12:46:46.405248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:54.646 [2024-07-15 12:46:46.405298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.646 [2024-07-15 12:46:46.405321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:54.646 [2024-07-15 12:46:46.405979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.646 [2024-07-15 12:46:46.406010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:54.646 [2024-07-15 12:46:46.406047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.646 [2024-07-15 12:46:46.406068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:54.646 [2024-07-15 12:46:46.406744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.646 [2024-07-15 12:46:46.406776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:54.646 [2024-07-15 12:46:46.406814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.646 [2024-07-15 12:46:46.406834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:54.646 passed 00:16:54.646 Test: blockdev nvme passthru rw ...passed 00:16:54.646 Test: blockdev nvme passthru vendor specific ...[2024-07-15 12:46:46.488858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:54.646 [2024-07-15 12:46:46.488897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:54.646 [2024-07-15 12:46:46.489165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:54.646 [2024-07-15 12:46:46.489195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:54.646 [2024-07-15 12:46:46.489463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:54.646 [2024-07-15 12:46:46.489495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:54.646 [2024-07-15 12:46:46.489750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:54.646 [2024-07-15 12:46:46.489780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:54.646 passed 00:16:54.646 Test: blockdev nvme admin passthru ...passed 00:16:54.646 Test: blockdev copy ...passed 00:16:54.646 00:16:54.646 Run Summary: Type Total Ran Passed Failed Inactive 00:16:54.646 suites 1 1 n/a 0 0 00:16:54.646 tests 23 23 23 0 0 00:16:54.646 asserts 152 152 152 0 n/a 00:16:54.646 00:16:54.646 Elapsed time = 1.321 seconds 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:54.905 rmmod nvme_tcp 00:16:54.905 rmmod nvme_fabrics 00:16:54.905 rmmod nvme_keyring 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3922846 ']' 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3922846 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 3922846 ']' 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3922846 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3922846 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3922846' 00:16:54.905 killing process with pid 3922846 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 
3922846 00:16:54.905 12:46:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3922846 00:16:55.473 12:46:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:55.473 12:46:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:55.473 12:46:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:55.473 12:46:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:55.473 12:46:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:55.473 12:46:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.473 12:46:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:55.473 12:46:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.379 12:46:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:57.379 00:16:57.379 real 0m10.928s 00:16:57.379 user 0m13.811s 00:16:57.379 sys 0m5.158s 00:16:57.379 12:46:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:57.379 12:46:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:57.379 ************************************ 00:16:57.379 END TEST nvmf_bdevio 00:16:57.379 ************************************ 00:16:57.379 12:46:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:57.379 12:46:49 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:57.379 12:46:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:57.379 12:46:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:57.379 12:46:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:57.379 ************************************ 00:16:57.379 START TEST nvmf_auth_target 00:16:57.379 
************************************ 00:16:57.379 12:46:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:57.638 * Looking for test storage... 00:16:57.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.638 12:46:49 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.638 12:46:49 
nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:57.638 12:46:49 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:57.638 12:46:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:04.231 12:46:54 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:04.231 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:04.232 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:04.232 12:46:54 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:04.232 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:04.232 Found net devices under 0000:af:00.0: cvl_0_0 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:04.232 Found net devices under 0000:af:00.1: cvl_0_1 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:04.232 12:46:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:04.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:04.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:17:04.232 00:17:04.232 --- 10.0.0.2 ping statistics --- 00:17:04.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.232 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:04.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:04.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:17:04.232 00:17:04.232 --- 10.0.0.1 ping statistics --- 00:17:04.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.232 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- 
# xtrace_disable 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3926865 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3926865 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3926865 ']' 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3927048 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@727 -- # key=a4c120fa848769818c5d12d568b95620287fa87085e46ea2 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Dah 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a4c120fa848769818c5d12d568b95620287fa87085e46ea2 0 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a4c120fa848769818c5d12d568b95620287fa87085e46ea2 0 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a4c120fa848769818c5d12d568b95620287fa87085e46ea2 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Dah 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Dah 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Dah 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:04.232 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:04.233 12:46:55 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dee92b164343d2144e86a56d167a7828213eec4158791361278f43f4dfeed8b8 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.g7Q 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dee92b164343d2144e86a56d167a7828213eec4158791361278f43f4dfeed8b8 3 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dee92b164343d2144e86a56d167a7828213eec4158791361278f43f4dfeed8b8 3 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dee92b164343d2144e86a56d167a7828213eec4158791361278f43f4dfeed8b8 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.g7Q 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.g7Q 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.g7Q 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # 
local -A digests 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a258aadfd54205d8e317a80800ed0c5f 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.wTA 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a258aadfd54205d8e317a80800ed0c5f 1 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a258aadfd54205d8e317a80800ed0c5f 1 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a258aadfd54205d8e317a80800ed0c5f 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.wTA 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.wTA 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.wTA 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b397af85b4fb5e81f7dd7372a25537b89c8196f8f65f4b4f 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.mTj 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b397af85b4fb5e81f7dd7372a25537b89c8196f8f65f4b4f 2 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b397af85b4fb5e81f7dd7372a25537b89c8196f8f65f4b4f 2 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b397af85b4fb5e81f7dd7372a25537b89c8196f8f65f4b4f 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.mTj 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.mTj 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.mTj 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0d206b814aa34988258eceefedfa1be2f227b7bb34b4f684 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.8Tk 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0d206b814aa34988258eceefedfa1be2f227b7bb34b4f684 2 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0d206b814aa34988258eceefedfa1be2f227b7bb34b4f684 2 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0d206b814aa34988258eceefedfa1be2f227b7bb34b4f684 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.8Tk 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.8Tk 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.8Tk 00:17:04.233 12:46:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4ae218203de87f6a6f88c5b112ebd9e1 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.LLl 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4ae218203de87f6a6f88c5b112ebd9e1 1 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4ae218203de87f6a6f88c5b112ebd9e1 1 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4ae218203de87f6a6f88c5b112ebd9e1 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:04.233 12:46:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:04.233 12:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.LLl 00:17:04.233 12:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.LLl 00:17:04.233 12:46:56 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.LLl 00:17:04.233 12:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:04.233 12:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:04.233 12:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:04.233 12:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:04.233 12:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:04.233 12:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:04.233 12:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:04.233 12:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9d23675151a36ad73a9bd928797a7b48b19baa89ebba77bd4f22b3ebe71f31c1 00:17:04.233 12:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:04.233 12:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.i91 00:17:04.233 12:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9d23675151a36ad73a9bd928797a7b48b19baa89ebba77bd4f22b3ebe71f31c1 3 00:17:04.234 12:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9d23675151a36ad73a9bd928797a7b48b19baa89ebba77bd4f22b3ebe71f31c1 3 00:17:04.234 12:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:04.234 12:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:04.234 12:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9d23675151a36ad73a9bd928797a7b48b19baa89ebba77bd4f22b3ebe71f31c1 00:17:04.234 12:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:04.234 12:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:04.234 12:46:56 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.i91 00:17:04.234 12:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.i91 00:17:04.234 12:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.i91 00:17:04.234 12:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:04.234 12:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3926865 00:17:04.234 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3926865 ']' 00:17:04.234 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.234 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.234 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
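The trace above runs `gen_dhchap_key`/`format_dhchap_key` repeatedly: draw random bytes with `xxd`, frame them as a DHHC-1 secret via an embedded `python -` step, and store the result in a `chmod 0600` temp file. A minimal stand-alone sketch of that pattern follows. The DHHC-1 framing shown (base64 of the secret bytes with a little-endian CRC-32 appended, as `nvme-cli gen-dhchap-key` produces) is an assumption; the authoritative implementation is SPDK's `nvmf/common.sh`.

```shell
# Hedged sketch of the gen_dhchap_key / format_dhchap_key pattern traced above.
# ASSUMPTION: secrets are framed "DHHC-1:0<digest>:base64(key||crc32):" as in
# nvme-cli; SPDK's real helper lives in nvmf/common.sh and may differ.
len=48        # key length in hex characters (48 hex chars = 24 random bytes)
digest=0      # 0=null, 1=sha256, 2=sha384, 3=sha512 (the digests[] map above)
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
formatted=$(python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC-32 appended little-endian
print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
EOF
)
file=$(mktemp -t spdk.key-null.XXX)          # same template the trace uses
printf '%s\n' "$formatted" > "$file"
chmod 0600 "$file"                           # keys must not be world-readable
echo "$file"
```

The generated file path is what the trace then registers on both sides with `keyring_file_add_key`.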
00:17:04.234 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.234 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.493 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:04.493 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:04.493 12:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3927048 /var/tmp/host.sock 00:17:04.493 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3927048 ']' 00:17:04.493 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:04.493 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.493 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:04.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:17:04.493 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.493 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.752 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:04.752 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:04.752 12:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:04.752 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.752 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.752 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.752 12:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:04.752 12:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Dah 00:17:04.752 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.752 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.752 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.752 12:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Dah 00:17:04.752 12:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Dah 00:17:05.011 12:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.g7Q ]] 00:17:05.011 12:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.g7Q 00:17:05.011 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.011 12:46:56 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.011 12:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.011 12:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.g7Q 00:17:05.011 12:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.g7Q 00:17:05.270 12:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:05.270 12:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.wTA 00:17:05.270 12:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.270 12:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.270 12:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.270 12:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.wTA 00:17:05.270 12:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.wTA 00:17:05.529 12:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.mTj ]] 00:17:05.529 12:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mTj 00:17:05.529 12:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.529 12:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.529 12:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.529 12:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mTj 00:17:05.529 12:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mTj 00:17:05.788 12:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:05.788 12:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.8Tk 00:17:05.788 12:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.788 12:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.788 12:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.788 12:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.8Tk 00:17:05.788 12:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.8Tk 00:17:06.047 12:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.LLl ]] 00:17:06.047 12:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LLl 00:17:06.047 12:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.047 12:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.047 12:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.047 12:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LLl 00:17:06.047 12:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.LLl 00:17:06.307 12:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:06.307 12:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.i91 00:17:06.307 12:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.307 12:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.307 12:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.307 12:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.i91 00:17:06.307 12:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.i91 00:17:06.566 12:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:06.566 12:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:06.566 12:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.566 12:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.566 12:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:06.566 12:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:06.825 12:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:06.825 12:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.825 12:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:06.825 12:46:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:06.825 12:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:06.825 12:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.825 12:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.825 12:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.825 12:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.825 12:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.825 12:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.825 12:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.085 00:17:07.085 12:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.085 12:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.085 12:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.343 12:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.343 
12:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.343 12:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.343 12:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.343 12:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.343 12:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.343 { 00:17:07.343 "cntlid": 1, 00:17:07.343 "qid": 0, 00:17:07.343 "state": "enabled", 00:17:07.343 "thread": "nvmf_tgt_poll_group_000", 00:17:07.343 "listen_address": { 00:17:07.343 "trtype": "TCP", 00:17:07.343 "adrfam": "IPv4", 00:17:07.343 "traddr": "10.0.0.2", 00:17:07.343 "trsvcid": "4420" 00:17:07.343 }, 00:17:07.343 "peer_address": { 00:17:07.343 "trtype": "TCP", 00:17:07.343 "adrfam": "IPv4", 00:17:07.343 "traddr": "10.0.0.1", 00:17:07.343 "trsvcid": "57854" 00:17:07.343 }, 00:17:07.343 "auth": { 00:17:07.343 "state": "completed", 00:17:07.343 "digest": "sha256", 00:17:07.343 "dhgroup": "null" 00:17:07.343 } 00:17:07.343 } 00:17:07.343 ]' 00:17:07.343 12:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.603 12:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.603 12:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.603 12:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:07.603 12:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.603 12:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.603 12:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.603 12:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.862 12:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRjMTIwZmE4NDg3Njk4MThjNWQxMmQ1NjhiOTU2MjAyODdmYTg3MDg1ZTQ2ZWEycInjqw==: --dhchap-ctrl-secret DHHC-1:03:ZGVlOTJiMTY0MzQzZDIxNDRlODZhNTZkMTY3YTc4MjgyMTNlZWM0MTU4NzkxMzYxMjc4ZjQzZjRkZmVlZDhiOP8EHms=: 00:17:08.797 12:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.797 12:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:08.797 12:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.797 12:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.797 12:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.797 12:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.797 12:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:08.797 12:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:08.797 12:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:08.797 12:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:17:08.797 12:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:08.797 12:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:08.797 12:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:08.797 12:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.797 12:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.797 12:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.797 12:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.797 12:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.797 12:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.797 12:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.056 00:17:09.056 12:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.056 12:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.056 12:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:17:09.314 12:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.314 12:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.314 12:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.314 12:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.314 12:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.314 12:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.314 { 00:17:09.314 "cntlid": 3, 00:17:09.314 "qid": 0, 00:17:09.314 "state": "enabled", 00:17:09.314 "thread": "nvmf_tgt_poll_group_000", 00:17:09.314 "listen_address": { 00:17:09.314 "trtype": "TCP", 00:17:09.314 "adrfam": "IPv4", 00:17:09.314 "traddr": "10.0.0.2", 00:17:09.314 "trsvcid": "4420" 00:17:09.314 }, 00:17:09.314 "peer_address": { 00:17:09.314 "trtype": "TCP", 00:17:09.314 "adrfam": "IPv4", 00:17:09.314 "traddr": "10.0.0.1", 00:17:09.314 "trsvcid": "57874" 00:17:09.314 }, 00:17:09.314 "auth": { 00:17:09.314 "state": "completed", 00:17:09.314 "digest": "sha256", 00:17:09.314 "dhgroup": "null" 00:17:09.314 } 00:17:09.314 } 00:17:09.314 ]' 00:17:09.314 12:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.572 12:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:09.572 12:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.572 12:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:09.572 12:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.572 12:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.572 12:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:17:09.572 12:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.830 12:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTI1OGFhZGZkNTQyMDVkOGUzMTdhODA4MDBlZDBjNWapZaHf: --dhchap-ctrl-secret DHHC-1:02:YjM5N2FmODViNGZiNWU4MWY3ZGQ3MzcyYTI1NTM3Yjg5YzgxOTZmOGY2NWY0YjRmr5fL9g==: 00:17:10.765 12:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.765 12:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:10.765 12:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.765 12:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.765 12:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.765 12:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.765 12:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:10.765 12:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:11.024 12:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:11.024 12:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:17:11.024 12:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:11.024 12:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:11.024 12:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:11.024 12:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.024 12:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.024 12:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.024 12:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.024 12:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.024 12:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.024 12:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.283 00:17:11.283 12:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.283 12:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.283 12:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.849 12:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.849 12:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.849 12:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.849 12:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.849 12:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.849 12:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.849 { 00:17:11.849 "cntlid": 5, 00:17:11.849 "qid": 0, 00:17:11.849 "state": "enabled", 00:17:11.849 "thread": "nvmf_tgt_poll_group_000", 00:17:11.849 "listen_address": { 00:17:11.849 "trtype": "TCP", 00:17:11.849 "adrfam": "IPv4", 00:17:11.849 "traddr": "10.0.0.2", 00:17:11.849 "trsvcid": "4420" 00:17:11.849 }, 00:17:11.849 "peer_address": { 00:17:11.849 "trtype": "TCP", 00:17:11.849 "adrfam": "IPv4", 00:17:11.849 "traddr": "10.0.0.1", 00:17:11.849 "trsvcid": "55956" 00:17:11.849 }, 00:17:11.849 "auth": { 00:17:11.849 "state": "completed", 00:17:11.849 "digest": "sha256", 00:17:11.849 "dhgroup": "null" 00:17:11.849 } 00:17:11.849 } 00:17:11.849 ]' 00:17:11.849 12:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.849 12:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.849 12:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.849 12:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:11.849 12:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.849 12:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.849 12:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 
-- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.849 12:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.107 12:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGQyMDZiODE0YWEzNDk4ODI1OGVjZWVmZWRmYTFiZTJmMjI3YjdiYjM0YjRmNjg02f23rQ==: --dhchap-ctrl-secret DHHC-1:01:NGFlMjE4MjAzZGU4N2Y2YTZmODhjNWIxMTJlYmQ5ZTFijzKD: 00:17:13.043 12:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.043 12:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:13.043 12:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.043 12:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.043 12:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.043 12:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.044 12:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:13.044 12:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:13.044 12:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:13.044 12:47:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.044 12:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:13.044 12:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:13.044 12:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:13.044 12:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.044 12:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:13.044 12:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.044 12:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.044 12:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.044 12:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.044 12:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.303 00:17:13.562 12:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.562 12:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.562 12:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:17:13.821 12:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.821 12:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.821 12:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.821 12:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.080 12:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.080 12:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.080 { 00:17:14.080 "cntlid": 7, 00:17:14.080 "qid": 0, 00:17:14.080 "state": "enabled", 00:17:14.080 "thread": "nvmf_tgt_poll_group_000", 00:17:14.080 "listen_address": { 00:17:14.080 "trtype": "TCP", 00:17:14.080 "adrfam": "IPv4", 00:17:14.080 "traddr": "10.0.0.2", 00:17:14.080 "trsvcid": "4420" 00:17:14.080 }, 00:17:14.080 "peer_address": { 00:17:14.080 "trtype": "TCP", 00:17:14.080 "adrfam": "IPv4", 00:17:14.080 "traddr": "10.0.0.1", 00:17:14.080 "trsvcid": "55970" 00:17:14.080 }, 00:17:14.080 "auth": { 00:17:14.080 "state": "completed", 00:17:14.080 "digest": "sha256", 00:17:14.080 "dhgroup": "null" 00:17:14.080 } 00:17:14.080 } 00:17:14.080 ]' 00:17:14.080 12:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.080 12:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.080 12:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.080 12:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:14.080 12:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.080 12:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.080 12:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:14.080 12:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.339 12:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWQyMzY3NTE1MWEzNmFkNzNhOWJkOTI4Nzk3YTdiNDhiMTliYWE4OWViYmE3N2JkNGYyMmIzZWJlNzFmMzFjMUedij8=: 00:17:15.275 12:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.275 12:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:15.275 12:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.275 12:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.275 12:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.275 12:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.275 12:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.275 12:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:15.276 12:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:15.276 12:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe2048 0 00:17:15.276 12:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.276 12:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:15.276 12:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:15.276 12:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:15.276 12:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.276 12:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.276 12:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.276 12:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.276 12:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.276 12:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.276 12:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.842 00:17:15.842 12:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.842 12:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.842 12:47:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.100 12:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.100 12:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.100 12:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.100 12:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.100 12:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.100 12:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.100 { 00:17:16.100 "cntlid": 9, 00:17:16.100 "qid": 0, 00:17:16.100 "state": "enabled", 00:17:16.100 "thread": "nvmf_tgt_poll_group_000", 00:17:16.100 "listen_address": { 00:17:16.100 "trtype": "TCP", 00:17:16.100 "adrfam": "IPv4", 00:17:16.100 "traddr": "10.0.0.2", 00:17:16.100 "trsvcid": "4420" 00:17:16.100 }, 00:17:16.100 "peer_address": { 00:17:16.100 "trtype": "TCP", 00:17:16.100 "adrfam": "IPv4", 00:17:16.100 "traddr": "10.0.0.1", 00:17:16.100 "trsvcid": "56002" 00:17:16.100 }, 00:17:16.100 "auth": { 00:17:16.100 "state": "completed", 00:17:16.100 "digest": "sha256", 00:17:16.100 "dhgroup": "ffdhe2048" 00:17:16.100 } 00:17:16.100 } 00:17:16.100 ]' 00:17:16.100 12:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.359 12:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.359 12:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.359 12:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:16.359 12:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.359 12:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.359 12:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.359 12:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.617 12:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRjMTIwZmE4NDg3Njk4MThjNWQxMmQ1NjhiOTU2MjAyODdmYTg3MDg1ZTQ2ZWEycInjqw==: --dhchap-ctrl-secret DHHC-1:03:ZGVlOTJiMTY0MzQzZDIxNDRlODZhNTZkMTY3YTc4MjgyMTNlZWM0MTU4NzkxMzYxMjc4ZjQzZjRkZmVlZDhiOP8EHms=: 00:17:17.553 12:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.553 12:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:17.553 12:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.553 12:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.553 12:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.553 12:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.553 12:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:17.553 12:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:17:17.553 12:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:17.553 12:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.553 12:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:17.553 12:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:17.553 12:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:17.553 12:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.553 12:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.553 12:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.553 12:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.553 12:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.553 12:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.553 12:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.812 00:17:17.812 12:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.812 12:47:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.812 12:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.071 12:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.071 12:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.071 12:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.071 12:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.071 12:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.071 12:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.071 { 00:17:18.071 "cntlid": 11, 00:17:18.071 "qid": 0, 00:17:18.071 "state": "enabled", 00:17:18.071 "thread": "nvmf_tgt_poll_group_000", 00:17:18.071 "listen_address": { 00:17:18.071 "trtype": "TCP", 00:17:18.071 "adrfam": "IPv4", 00:17:18.071 "traddr": "10.0.0.2", 00:17:18.071 "trsvcid": "4420" 00:17:18.071 }, 00:17:18.071 "peer_address": { 00:17:18.071 "trtype": "TCP", 00:17:18.071 "adrfam": "IPv4", 00:17:18.071 "traddr": "10.0.0.1", 00:17:18.071 "trsvcid": "56020" 00:17:18.071 }, 00:17:18.071 "auth": { 00:17:18.071 "state": "completed", 00:17:18.071 "digest": "sha256", 00:17:18.071 "dhgroup": "ffdhe2048" 00:17:18.071 } 00:17:18.072 } 00:17:18.072 ]' 00:17:18.072 12:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.331 12:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.331 12:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.331 12:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:18.331 12:47:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.331 12:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.331 12:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.331 12:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.590 12:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTI1OGFhZGZkNTQyMDVkOGUzMTdhODA4MDBlZDBjNWapZaHf: --dhchap-ctrl-secret DHHC-1:02:YjM5N2FmODViNGZiNWU4MWY3ZGQ3MzcyYTI1NTM3Yjg5YzgxOTZmOGY2NWY0YjRmr5fL9g==: 00:17:19.525 12:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.525 12:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:19.525 12:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.525 12:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.525 12:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.525 12:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.525 12:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:19.525 12:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:19.784 12:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:19.784 12:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.784 12:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:19.784 12:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:19.784 12:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:19.784 12:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.784 12:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.784 12:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.784 12:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.784 12:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.784 12:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.784 12:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.044 
00:17:20.044 12:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.044 12:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.044 12:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.610 12:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.610 12:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.610 12:47:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.610 12:47:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.610 12:47:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.610 12:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.610 { 00:17:20.610 "cntlid": 13, 00:17:20.610 "qid": 0, 00:17:20.610 "state": "enabled", 00:17:20.610 "thread": "nvmf_tgt_poll_group_000", 00:17:20.610 "listen_address": { 00:17:20.610 "trtype": "TCP", 00:17:20.610 "adrfam": "IPv4", 00:17:20.610 "traddr": "10.0.0.2", 00:17:20.610 "trsvcid": "4420" 00:17:20.610 }, 00:17:20.610 "peer_address": { 00:17:20.610 "trtype": "TCP", 00:17:20.610 "adrfam": "IPv4", 00:17:20.610 "traddr": "10.0.0.1", 00:17:20.610 "trsvcid": "42710" 00:17:20.610 }, 00:17:20.610 "auth": { 00:17:20.610 "state": "completed", 00:17:20.610 "digest": "sha256", 00:17:20.610 "dhgroup": "ffdhe2048" 00:17:20.610 } 00:17:20.610 } 00:17:20.610 ]' 00:17:20.610 12:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.610 12:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.610 12:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.610 12:47:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:20.610 12:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.610 12:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.610 12:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.610 12:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.868 12:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGQyMDZiODE0YWEzNDk4ODI1OGVjZWVmZWRmYTFiZTJmMjI3YjdiYjM0YjRmNjg02f23rQ==: --dhchap-ctrl-secret DHHC-1:01:NGFlMjE4MjAzZGU4N2Y2YTZmODhjNWIxMTJlYmQ5ZTFijzKD: 00:17:21.815 12:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.815 12:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:21.815 12:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.815 12:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.815 12:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.815 12:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.815 12:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:17:21.815 12:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:22.074 12:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:22.074 12:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.074 12:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:22.074 12:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:22.074 12:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:22.074 12:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.074 12:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:22.074 12:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.074 12:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.074 12:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.074 12:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.074 12:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.332 
00:17:22.332 12:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.332 12:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.332 12:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.589 12:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.589 12:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.589 12:47:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.589 12:47:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.589 12:47:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.589 12:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.589 { 00:17:22.589 "cntlid": 15, 00:17:22.589 "qid": 0, 00:17:22.589 "state": "enabled", 00:17:22.589 "thread": "nvmf_tgt_poll_group_000", 00:17:22.589 "listen_address": { 00:17:22.589 "trtype": "TCP", 00:17:22.589 "adrfam": "IPv4", 00:17:22.589 "traddr": "10.0.0.2", 00:17:22.589 "trsvcid": "4420" 00:17:22.589 }, 00:17:22.589 "peer_address": { 00:17:22.589 "trtype": "TCP", 00:17:22.589 "adrfam": "IPv4", 00:17:22.589 "traddr": "10.0.0.1", 00:17:22.589 "trsvcid": "42726" 00:17:22.589 }, 00:17:22.589 "auth": { 00:17:22.589 "state": "completed", 00:17:22.589 "digest": "sha256", 00:17:22.589 "dhgroup": "ffdhe2048" 00:17:22.589 } 00:17:22.589 } 00:17:22.589 ]' 00:17:22.589 12:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.589 12:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:22.589 12:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.589 12:47:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:22.589 12:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.847 12:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.847 12:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.847 12:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.105 12:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWQyMzY3NTE1MWEzNmFkNzNhOWJkOTI4Nzk3YTdiNDhiMTliYWE4OWViYmE3N2JkNGYyMmIzZWJlNzFmMzFjMUedij8=: 00:17:23.672 12:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.931 12:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:23.931 12:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.931 12:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.931 12:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.931 12:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.931 12:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.931 12:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:23.931 12:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:24.191 12:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:24.191 12:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.191 12:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:24.191 12:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:24.191 12:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:24.191 12:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.191 12:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.191 12:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.191 12:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.191 12:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.191 12:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.191 12:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.450 00:17:24.450 12:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.450 12:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.450 12:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.708 12:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.708 12:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.708 12:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.708 12:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.708 12:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.708 12:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.708 { 00:17:24.708 "cntlid": 17, 00:17:24.708 "qid": 0, 00:17:24.708 "state": "enabled", 00:17:24.708 "thread": "nvmf_tgt_poll_group_000", 00:17:24.708 "listen_address": { 00:17:24.708 "trtype": "TCP", 00:17:24.708 "adrfam": "IPv4", 00:17:24.708 "traddr": "10.0.0.2", 00:17:24.708 "trsvcid": "4420" 00:17:24.708 }, 00:17:24.708 "peer_address": { 00:17:24.708 "trtype": "TCP", 00:17:24.708 "adrfam": "IPv4", 00:17:24.708 "traddr": "10.0.0.1", 00:17:24.708 "trsvcid": "42758" 00:17:24.708 }, 00:17:24.708 "auth": { 00:17:24.708 "state": "completed", 00:17:24.708 "digest": "sha256", 00:17:24.708 "dhgroup": "ffdhe3072" 00:17:24.708 } 00:17:24.708 } 00:17:24.708 ]' 00:17:24.708 12:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.708 12:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:17:24.708 12:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.708 12:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:24.708 12:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.708 12:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.708 12:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.708 12:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.966 12:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRjMTIwZmE4NDg3Njk4MThjNWQxMmQ1NjhiOTU2MjAyODdmYTg3MDg1ZTQ2ZWEycInjqw==: --dhchap-ctrl-secret DHHC-1:03:ZGVlOTJiMTY0MzQzZDIxNDRlODZhNTZkMTY3YTc4MjgyMTNlZWM0MTU4NzkxMzYxMjc4ZjQzZjRkZmVlZDhiOP8EHms=: 00:17:25.900 12:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.900 12:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:25.900 12:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.900 12:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.900 12:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.900 12:47:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.900 12:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:25.900 12:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:26.159 12:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:26.159 12:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.159 12:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:26.159 12:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:26.159 12:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:26.159 12:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.159 12:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.159 12:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.159 12:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.159 12:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.159 12:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.159 12:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.417 00:17:26.417 12:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.417 12:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.417 12:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.676 12:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.676 12:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.676 12:47:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.676 12:47:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.676 12:47:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.676 12:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.676 { 00:17:26.676 "cntlid": 19, 00:17:26.676 "qid": 0, 00:17:26.676 "state": "enabled", 00:17:26.676 "thread": "nvmf_tgt_poll_group_000", 00:17:26.676 "listen_address": { 00:17:26.676 "trtype": "TCP", 00:17:26.676 "adrfam": "IPv4", 00:17:26.676 "traddr": "10.0.0.2", 00:17:26.676 "trsvcid": "4420" 00:17:26.676 }, 00:17:26.676 "peer_address": { 00:17:26.676 "trtype": "TCP", 00:17:26.676 "adrfam": "IPv4", 00:17:26.676 "traddr": "10.0.0.1", 00:17:26.676 "trsvcid": "42786" 00:17:26.676 }, 00:17:26.676 "auth": { 00:17:26.676 "state": "completed", 00:17:26.676 "digest": "sha256", 00:17:26.676 "dhgroup": "ffdhe3072" 00:17:26.676 } 00:17:26.676 } 00:17:26.676 ]' 00:17:26.676 
12:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.676 12:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:26.676 12:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.935 12:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:26.935 12:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.935 12:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.935 12:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.935 12:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.193 12:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTI1OGFhZGZkNTQyMDVkOGUzMTdhODA4MDBlZDBjNWapZaHf: --dhchap-ctrl-secret DHHC-1:02:YjM5N2FmODViNGZiNWU4MWY3ZGQ3MzcyYTI1NTM3Yjg5YzgxOTZmOGY2NWY0YjRmr5fL9g==: 00:17:28.131 12:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.131 12:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:28.131 12:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.131 12:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.131 12:47:19 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.131 12:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.131 12:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:28.131 12:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:28.131 12:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:28.131 12:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.131 12:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:28.131 12:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:28.131 12:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:28.131 12:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.131 12:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.131 12:47:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.131 12:47:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.131 12:47:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.131 12:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:28.131 12:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.699 00:17:28.699 12:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.699 12:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.699 12:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.957 12:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.957 12:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.957 12:47:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.957 12:47:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.215 12:47:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.215 12:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.215 { 00:17:29.215 "cntlid": 21, 00:17:29.215 "qid": 0, 00:17:29.215 "state": "enabled", 00:17:29.215 "thread": "nvmf_tgt_poll_group_000", 00:17:29.215 "listen_address": { 00:17:29.215 "trtype": "TCP", 00:17:29.215 "adrfam": "IPv4", 00:17:29.215 "traddr": "10.0.0.2", 00:17:29.215 "trsvcid": "4420" 00:17:29.215 }, 00:17:29.215 "peer_address": { 00:17:29.215 "trtype": "TCP", 00:17:29.215 "adrfam": "IPv4", 00:17:29.215 "traddr": "10.0.0.1", 00:17:29.215 "trsvcid": "42822" 00:17:29.215 }, 00:17:29.215 "auth": { 00:17:29.215 "state": "completed", 00:17:29.215 "digest": 
"sha256", 00:17:29.215 "dhgroup": "ffdhe3072" 00:17:29.215 } 00:17:29.215 } 00:17:29.215 ]' 00:17:29.215 12:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.215 12:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:29.215 12:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.215 12:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:29.215 12:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.215 12:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.215 12:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.215 12:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.473 12:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGQyMDZiODE0YWEzNDk4ODI1OGVjZWVmZWRmYTFiZTJmMjI3YjdiYjM0YjRmNjg02f23rQ==: --dhchap-ctrl-secret DHHC-1:01:NGFlMjE4MjAzZGU4N2Y2YTZmODhjNWIxMTJlYmQ5ZTFijzKD: 00:17:30.425 12:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.425 12:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:30.425 12:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.425 12:47:22 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.425 12:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.425 12:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.425 12:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:30.425 12:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:30.738 12:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:30.738 12:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.738 12:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:30.738 12:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:30.738 12:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:30.738 12:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.738 12:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:30.738 12:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.738 12:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.738 12:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.738 12:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:30.738 12:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.008 00:17:31.008 12:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.008 12:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.008 12:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.266 12:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.266 12:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.266 12:47:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.266 12:47:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.267 12:47:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.267 12:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.267 { 00:17:31.267 "cntlid": 23, 00:17:31.267 "qid": 0, 00:17:31.267 "state": "enabled", 00:17:31.267 "thread": "nvmf_tgt_poll_group_000", 00:17:31.267 "listen_address": { 00:17:31.267 "trtype": "TCP", 00:17:31.267 "adrfam": "IPv4", 00:17:31.267 "traddr": "10.0.0.2", 00:17:31.267 "trsvcid": "4420" 00:17:31.267 }, 00:17:31.267 "peer_address": { 00:17:31.267 "trtype": "TCP", 00:17:31.267 "adrfam": "IPv4", 00:17:31.267 "traddr": "10.0.0.1", 00:17:31.267 "trsvcid": "48736" 00:17:31.267 }, 00:17:31.267 "auth": 
{ 00:17:31.267 "state": "completed", 00:17:31.267 "digest": "sha256", 00:17:31.267 "dhgroup": "ffdhe3072" 00:17:31.267 } 00:17:31.267 } 00:17:31.267 ]' 00:17:31.267 12:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.267 12:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.267 12:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.267 12:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:31.267 12:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.267 12:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.267 12:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.267 12:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.525 12:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWQyMzY3NTE1MWEzNmFkNzNhOWJkOTI4Nzk3YTdiNDhiMTliYWE4OWViYmE3N2JkNGYyMmIzZWJlNzFmMzFjMUedij8=: 00:17:32.458 12:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.458 12:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:32.459 12:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.459 12:47:24 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.459 12:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.459 12:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.459 12:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.459 12:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:32.459 12:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:32.717 12:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:17:32.717 12:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.717 12:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:32.717 12:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:32.717 12:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:32.717 12:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.717 12:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.717 12:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.717 12:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.717 12:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.717 12:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.717 12:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.976 00:17:32.976 12:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.976 12:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.976 12:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.544 12:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.544 12:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.544 12:47:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.544 12:47:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.544 12:47:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.544 12:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.544 { 00:17:33.544 "cntlid": 25, 00:17:33.544 "qid": 0, 00:17:33.544 "state": "enabled", 00:17:33.544 "thread": "nvmf_tgt_poll_group_000", 00:17:33.544 "listen_address": { 00:17:33.544 "trtype": "TCP", 00:17:33.544 "adrfam": "IPv4", 00:17:33.544 "traddr": "10.0.0.2", 00:17:33.544 "trsvcid": "4420" 00:17:33.544 }, 00:17:33.544 "peer_address": { 00:17:33.544 "trtype": "TCP", 
00:17:33.544 "adrfam": "IPv4", 00:17:33.544 "traddr": "10.0.0.1", 00:17:33.544 "trsvcid": "48776" 00:17:33.544 }, 00:17:33.544 "auth": { 00:17:33.544 "state": "completed", 00:17:33.544 "digest": "sha256", 00:17:33.544 "dhgroup": "ffdhe4096" 00:17:33.544 } 00:17:33.544 } 00:17:33.544 ]' 00:17:33.544 12:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.544 12:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.544 12:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.544 12:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:33.544 12:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.544 12:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.544 12:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.544 12:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.802 12:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRjMTIwZmE4NDg3Njk4MThjNWQxMmQ1NjhiOTU2MjAyODdmYTg3MDg1ZTQ2ZWEycInjqw==: --dhchap-ctrl-secret DHHC-1:03:ZGVlOTJiMTY0MzQzZDIxNDRlODZhNTZkMTY3YTc4MjgyMTNlZWM0MTU4NzkxMzYxMjc4ZjQzZjRkZmVlZDhiOP8EHms=: 00:17:34.737 12:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.737 12:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:34.737 12:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.737 12:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.737 12:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.737 12:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.737 12:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:34.737 12:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:34.995 12:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:34.995 12:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.995 12:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:34.995 12:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:34.995 12:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:34.995 12:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.995 12:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.995 12:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.995 12:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.995 12:47:26 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.995 12:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.995 12:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.252 00:17:35.252 12:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.252 12:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.252 12:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.818 12:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.818 12:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.818 12:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.818 12:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.818 12:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.818 12:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.818 { 00:17:35.818 "cntlid": 27, 00:17:35.818 "qid": 0, 00:17:35.818 "state": "enabled", 00:17:35.818 "thread": "nvmf_tgt_poll_group_000", 00:17:35.818 "listen_address": { 00:17:35.818 "trtype": "TCP", 00:17:35.818 "adrfam": 
"IPv4", 00:17:35.818 "traddr": "10.0.0.2", 00:17:35.818 "trsvcid": "4420" 00:17:35.818 }, 00:17:35.818 "peer_address": { 00:17:35.818 "trtype": "TCP", 00:17:35.818 "adrfam": "IPv4", 00:17:35.818 "traddr": "10.0.0.1", 00:17:35.818 "trsvcid": "48810" 00:17:35.818 }, 00:17:35.818 "auth": { 00:17:35.818 "state": "completed", 00:17:35.818 "digest": "sha256", 00:17:35.818 "dhgroup": "ffdhe4096" 00:17:35.818 } 00:17:35.818 } 00:17:35.818 ]' 00:17:35.818 12:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.818 12:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.818 12:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.076 12:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:36.076 12:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.076 12:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.076 12:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.076 12:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.334 12:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTI1OGFhZGZkNTQyMDVkOGUzMTdhODA4MDBlZDBjNWapZaHf: --dhchap-ctrl-secret DHHC-1:02:YjM5N2FmODViNGZiNWU4MWY3ZGQ3MzcyYTI1NTM3Yjg5YzgxOTZmOGY2NWY0YjRmr5fL9g==: 00:17:37.276 12:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:17:37.276 12:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:37.276 12:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.276 12:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.276 12:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.276 12:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.276 12:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:37.276 12:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:37.276 12:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:17:37.276 12:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.276 12:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:37.276 12:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:37.276 12:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:37.276 12:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.276 12:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.276 12:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.276 12:47:29 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.276 12:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.276 12:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.276 12:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.212 00:17:38.212 12:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.212 12:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.212 12:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.471 12:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.471 12:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.471 12:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.471 12:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.471 12:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.471 12:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.471 { 00:17:38.471 "cntlid": 29, 00:17:38.471 "qid": 0, 00:17:38.471 "state": "enabled", 00:17:38.471 "thread": 
"nvmf_tgt_poll_group_000", 00:17:38.471 "listen_address": { 00:17:38.471 "trtype": "TCP", 00:17:38.471 "adrfam": "IPv4", 00:17:38.471 "traddr": "10.0.0.2", 00:17:38.471 "trsvcid": "4420" 00:17:38.471 }, 00:17:38.471 "peer_address": { 00:17:38.471 "trtype": "TCP", 00:17:38.471 "adrfam": "IPv4", 00:17:38.471 "traddr": "10.0.0.1", 00:17:38.471 "trsvcid": "48850" 00:17:38.471 }, 00:17:38.471 "auth": { 00:17:38.471 "state": "completed", 00:17:38.471 "digest": "sha256", 00:17:38.471 "dhgroup": "ffdhe4096" 00:17:38.471 } 00:17:38.471 } 00:17:38.471 ]' 00:17:38.471 12:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.471 12:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.471 12:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.730 12:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:38.730 12:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.730 12:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.730 12:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.730 12:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.989 12:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGQyMDZiODE0YWEzNDk4ODI1OGVjZWVmZWRmYTFiZTJmMjI3YjdiYjM0YjRmNjg02f23rQ==: --dhchap-ctrl-secret DHHC-1:01:NGFlMjE4MjAzZGU4N2Y2YTZmODhjNWIxMTJlYmQ5ZTFijzKD: 00:17:39.923 12:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.923 12:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:39.923 12:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.923 12:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.923 12:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.923 12:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.923 12:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.923 12:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.923 12:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:39.923 12:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.923 12:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:39.923 12:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:39.923 12:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:39.923 12:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.923 12:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:39.923 12:47:31 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.923 12:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.923 12:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.923 12:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.923 12:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:40.489 00:17:40.489 12:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.489 12:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.489 12:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.772 12:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.772 12:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.772 12:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.772 12:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.772 12:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.772 12:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.772 { 00:17:40.772 "cntlid": 31, 00:17:40.772 "qid": 0, 00:17:40.772 "state": "enabled", 00:17:40.772 "thread": 
"nvmf_tgt_poll_group_000", 00:17:40.772 "listen_address": { 00:17:40.772 "trtype": "TCP", 00:17:40.772 "adrfam": "IPv4", 00:17:40.772 "traddr": "10.0.0.2", 00:17:40.772 "trsvcid": "4420" 00:17:40.772 }, 00:17:40.772 "peer_address": { 00:17:40.772 "trtype": "TCP", 00:17:40.772 "adrfam": "IPv4", 00:17:40.772 "traddr": "10.0.0.1", 00:17:40.772 "trsvcid": "47940" 00:17:40.772 }, 00:17:40.772 "auth": { 00:17:40.772 "state": "completed", 00:17:40.772 "digest": "sha256", 00:17:40.772 "dhgroup": "ffdhe4096" 00:17:40.772 } 00:17:40.772 } 00:17:40.772 ]' 00:17:40.772 12:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.029 12:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.029 12:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.029 12:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:41.029 12:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.029 12:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.029 12:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.029 12:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.287 12:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWQyMzY3NTE1MWEzNmFkNzNhOWJkOTI4Nzk3YTdiNDhiMTliYWE4OWViYmE3N2JkNGYyMmIzZWJlNzFmMzFjMUedij8=: 00:17:42.220 12:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.220 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.220 12:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:42.220 12:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.220 12:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.220 12:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.220 12:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:42.220 12:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.220 12:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:42.220 12:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:42.220 12:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:42.220 12:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.220 12:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:42.220 12:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:42.220 12:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:42.220 12:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.220 12:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:42.220 12:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.220 12:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.220 12:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.220 12:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.220 12:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.787 00:17:42.787 12:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.787 12:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.787 12:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.046 12:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.046 12:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.046 12:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.046 12:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.046 12:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.046 12:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:17:43.046 { 00:17:43.046 "cntlid": 33, 00:17:43.046 "qid": 0, 00:17:43.046 "state": "enabled", 00:17:43.046 "thread": "nvmf_tgt_poll_group_000", 00:17:43.046 "listen_address": { 00:17:43.046 "trtype": "TCP", 00:17:43.046 "adrfam": "IPv4", 00:17:43.046 "traddr": "10.0.0.2", 00:17:43.046 "trsvcid": "4420" 00:17:43.046 }, 00:17:43.046 "peer_address": { 00:17:43.046 "trtype": "TCP", 00:17:43.046 "adrfam": "IPv4", 00:17:43.046 "traddr": "10.0.0.1", 00:17:43.046 "trsvcid": "47976" 00:17:43.046 }, 00:17:43.046 "auth": { 00:17:43.046 "state": "completed", 00:17:43.046 "digest": "sha256", 00:17:43.046 "dhgroup": "ffdhe6144" 00:17:43.046 } 00:17:43.046 } 00:17:43.046 ]' 00:17:43.046 12:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.046 12:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.046 12:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.046 12:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:43.046 12:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.306 12:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.306 12:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.306 12:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.565 12:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRjMTIwZmE4NDg3Njk4MThjNWQxMmQ1NjhiOTU2MjAyODdmYTg3MDg1ZTQ2ZWEycInjqw==: --dhchap-ctrl-secret 
DHHC-1:03:ZGVlOTJiMTY0MzQzZDIxNDRlODZhNTZkMTY3YTc4MjgyMTNlZWM0MTU4NzkxMzYxMjc4ZjQzZjRkZmVlZDhiOP8EHms=: 00:17:44.131 12:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.131 12:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:44.131 12:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.131 12:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.390 12:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.390 12:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.390 12:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:44.390 12:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:44.390 12:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:44.390 12:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.390 12:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:44.390 12:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:44.390 12:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:44.390 12:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.390 12:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.390 12:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.390 12:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.649 12:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.649 12:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.649 12:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.908 00:17:44.908 12:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.908 12:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.908 12:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.167 12:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.167 12:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.167 12:47:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.167 12:47:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.167 12:47:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.167 12:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.167 { 00:17:45.167 "cntlid": 35, 00:17:45.167 "qid": 0, 00:17:45.167 "state": "enabled", 00:17:45.167 "thread": "nvmf_tgt_poll_group_000", 00:17:45.167 "listen_address": { 00:17:45.167 "trtype": "TCP", 00:17:45.167 "adrfam": "IPv4", 00:17:45.167 "traddr": "10.0.0.2", 00:17:45.167 "trsvcid": "4420" 00:17:45.167 }, 00:17:45.168 "peer_address": { 00:17:45.168 "trtype": "TCP", 00:17:45.168 "adrfam": "IPv4", 00:17:45.168 "traddr": "10.0.0.1", 00:17:45.168 "trsvcid": "48000" 00:17:45.168 }, 00:17:45.168 "auth": { 00:17:45.168 "state": "completed", 00:17:45.168 "digest": "sha256", 00:17:45.168 "dhgroup": "ffdhe6144" 00:17:45.168 } 00:17:45.168 } 00:17:45.168 ]' 00:17:45.168 12:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.426 12:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:45.426 12:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.426 12:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:45.426 12:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.426 12:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.426 12:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.426 12:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.685 12:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTI1OGFhZGZkNTQyMDVkOGUzMTdhODA4MDBlZDBjNWapZaHf: --dhchap-ctrl-secret DHHC-1:02:YjM5N2FmODViNGZiNWU4MWY3ZGQ3MzcyYTI1NTM3Yjg5YzgxOTZmOGY2NWY0YjRmr5fL9g==: 00:17:46.621 12:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.621 12:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:46.621 12:47:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.621 12:47:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.621 12:47:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.621 12:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.621 12:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:46.622 12:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:46.881 12:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:46.881 12:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.881 12:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:46.881 12:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:46.881 12:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:46.881 12:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:46.881 12:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.881 12:47:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.881 12:47:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.881 12:47:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.881 12:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.881 12:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.141 00:17:47.141 12:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.141 12:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.141 12:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.399 12:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.399 12:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.399 12:47:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.399 12:47:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.399 12:47:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.399 12:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.399 { 00:17:47.399 "cntlid": 37, 00:17:47.399 "qid": 0, 00:17:47.399 "state": "enabled", 00:17:47.399 "thread": "nvmf_tgt_poll_group_000", 00:17:47.399 "listen_address": { 00:17:47.399 "trtype": "TCP", 00:17:47.399 "adrfam": "IPv4", 00:17:47.399 "traddr": "10.0.0.2", 00:17:47.399 "trsvcid": "4420" 00:17:47.400 }, 00:17:47.400 "peer_address": { 00:17:47.400 "trtype": "TCP", 00:17:47.400 "adrfam": "IPv4", 00:17:47.400 "traddr": "10.0.0.1", 00:17:47.400 "trsvcid": "48038" 00:17:47.400 }, 00:17:47.400 "auth": { 00:17:47.400 "state": "completed", 00:17:47.400 "digest": "sha256", 00:17:47.400 "dhgroup": "ffdhe6144" 00:17:47.400 } 00:17:47.400 } 00:17:47.400 ]' 00:17:47.400 12:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.659 12:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:47.659 12:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.659 12:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:47.659 12:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.659 12:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.659 12:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.659 12:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.918 12:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGQyMDZiODE0YWEzNDk4ODI1OGVjZWVmZWRmYTFiZTJmMjI3YjdiYjM0YjRmNjg02f23rQ==: --dhchap-ctrl-secret DHHC-1:01:NGFlMjE4MjAzZGU4N2Y2YTZmODhjNWIxMTJlYmQ5ZTFijzKD: 00:17:48.486 12:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.486 12:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:48.486 12:47:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.486 12:47:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.745 12:47:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.745 12:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.745 12:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:48.745 12:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:48.745 12:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:48.745 12:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.745 12:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:48.745 12:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:48.745 12:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:48.745 12:47:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.745 12:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:48.745 12:47:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.745 12:47:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.004 12:47:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.004 12:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.004 12:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.263 00:17:49.263 12:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.263 12:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.263 12:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.522 12:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.522 12:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.522 12:47:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.522 12:47:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.522 12:47:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.522 12:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.522 { 00:17:49.522 "cntlid": 39, 00:17:49.522 "qid": 0, 00:17:49.522 "state": "enabled", 00:17:49.522 "thread": "nvmf_tgt_poll_group_000", 00:17:49.522 "listen_address": { 00:17:49.522 "trtype": "TCP", 00:17:49.522 "adrfam": "IPv4", 00:17:49.522 "traddr": "10.0.0.2", 00:17:49.522 "trsvcid": "4420" 00:17:49.522 }, 00:17:49.522 "peer_address": { 00:17:49.522 "trtype": "TCP", 00:17:49.522 "adrfam": "IPv4", 00:17:49.522 "traddr": "10.0.0.1", 00:17:49.522 "trsvcid": "48068" 00:17:49.522 }, 00:17:49.522 "auth": { 00:17:49.522 "state": "completed", 00:17:49.522 "digest": "sha256", 00:17:49.522 "dhgroup": "ffdhe6144" 00:17:49.522 } 00:17:49.522 } 00:17:49.522 ]' 00:17:49.522 12:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.781 12:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.781 12:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.781 12:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:49.781 12:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.781 12:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.781 12:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.781 12:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.040 12:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWQyMzY3NTE1MWEzNmFkNzNhOWJkOTI4Nzk3YTdiNDhiMTliYWE4OWViYmE3N2JkNGYyMmIzZWJlNzFmMzFjMUedij8=: 00:17:50.977 12:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.977 12:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:50.977 12:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.977 12:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.977 12:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.977 12:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.977 12:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.977 12:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:50.977 12:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:50.977 12:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:50.977 12:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.977 12:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:50.977 12:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:50.977 12:47:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:17:50.977 12:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.977 12:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.977 12:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.977 12:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.977 12:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.977 12:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.977 12:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.911 00:17:51.911 12:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.911 12:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.911 12:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.170 12:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.170 12:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:52.170 12:47:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.170 12:47:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.170 12:47:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.170 12:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.170 { 00:17:52.170 "cntlid": 41, 00:17:52.170 "qid": 0, 00:17:52.170 "state": "enabled", 00:17:52.170 "thread": "nvmf_tgt_poll_group_000", 00:17:52.170 "listen_address": { 00:17:52.170 "trtype": "TCP", 00:17:52.170 "adrfam": "IPv4", 00:17:52.170 "traddr": "10.0.0.2", 00:17:52.170 "trsvcid": "4420" 00:17:52.170 }, 00:17:52.170 "peer_address": { 00:17:52.170 "trtype": "TCP", 00:17:52.170 "adrfam": "IPv4", 00:17:52.170 "traddr": "10.0.0.1", 00:17:52.170 "trsvcid": "40972" 00:17:52.170 }, 00:17:52.170 "auth": { 00:17:52.170 "state": "completed", 00:17:52.170 "digest": "sha256", 00:17:52.170 "dhgroup": "ffdhe8192" 00:17:52.170 } 00:17:52.170 } 00:17:52.170 ]' 00:17:52.170 12:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.170 12:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.170 12:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.170 12:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:52.170 12:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.170 12:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.170 12:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.170 12:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:52.428 12:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRjMTIwZmE4NDg3Njk4MThjNWQxMmQ1NjhiOTU2MjAyODdmYTg3MDg1ZTQ2ZWEycInjqw==: --dhchap-ctrl-secret DHHC-1:03:ZGVlOTJiMTY0MzQzZDIxNDRlODZhNTZkMTY3YTc4MjgyMTNlZWM0MTU4NzkxMzYxMjc4ZjQzZjRkZmVlZDhiOP8EHms=: 00:17:53.384 12:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.384 12:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:53.384 12:47:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.384 12:47:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.384 12:47:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.384 12:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.384 12:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:53.384 12:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:53.644 12:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:53.644 12:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.644 12:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:17:53.644 12:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:53.644 12:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:53.644 12:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.644 12:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.644 12:47:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.644 12:47:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.644 12:47:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.644 12:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.644 12:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.210 00:17:54.210 12:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.210 12:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.210 12:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.468 12:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:17:54.468 12:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.468 12:47:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.468 12:47:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.468 12:47:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.468 12:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.468 { 00:17:54.468 "cntlid": 43, 00:17:54.468 "qid": 0, 00:17:54.468 "state": "enabled", 00:17:54.468 "thread": "nvmf_tgt_poll_group_000", 00:17:54.468 "listen_address": { 00:17:54.468 "trtype": "TCP", 00:17:54.468 "adrfam": "IPv4", 00:17:54.468 "traddr": "10.0.0.2", 00:17:54.468 "trsvcid": "4420" 00:17:54.468 }, 00:17:54.468 "peer_address": { 00:17:54.468 "trtype": "TCP", 00:17:54.468 "adrfam": "IPv4", 00:17:54.468 "traddr": "10.0.0.1", 00:17:54.468 "trsvcid": "40992" 00:17:54.468 }, 00:17:54.468 "auth": { 00:17:54.468 "state": "completed", 00:17:54.468 "digest": "sha256", 00:17:54.468 "dhgroup": "ffdhe8192" 00:17:54.468 } 00:17:54.468 } 00:17:54.468 ]' 00:17:54.468 12:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.726 12:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.726 12:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.726 12:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:54.726 12:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.726 12:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.726 12:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.726 12:47:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.984 12:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTI1OGFhZGZkNTQyMDVkOGUzMTdhODA4MDBlZDBjNWapZaHf: --dhchap-ctrl-secret DHHC-1:02:YjM5N2FmODViNGZiNWU4MWY3ZGQ3MzcyYTI1NTM3Yjg5YzgxOTZmOGY2NWY0YjRmr5fL9g==: 00:17:55.921 12:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.921 12:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:55.921 12:47:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.921 12:47:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.921 12:47:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.921 12:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.921 12:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:55.921 12:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:56.180 12:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:56.180 12:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:17:56.180 12:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.180 12:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:56.180 12:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:56.180 12:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.180 12:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.180 12:47:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.180 12:47:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.180 12:47:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.180 12:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.180 12:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.748 00:17:56.748 12:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.748 12:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.748 12:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:17:57.006 12:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.006 12:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.006 12:47:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.006 12:47:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.006 12:47:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.006 12:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.006 { 00:17:57.006 "cntlid": 45, 00:17:57.006 "qid": 0, 00:17:57.006 "state": "enabled", 00:17:57.006 "thread": "nvmf_tgt_poll_group_000", 00:17:57.006 "listen_address": { 00:17:57.006 "trtype": "TCP", 00:17:57.006 "adrfam": "IPv4", 00:17:57.006 "traddr": "10.0.0.2", 00:17:57.006 "trsvcid": "4420" 00:17:57.006 }, 00:17:57.006 "peer_address": { 00:17:57.006 "trtype": "TCP", 00:17:57.006 "adrfam": "IPv4", 00:17:57.006 "traddr": "10.0.0.1", 00:17:57.006 "trsvcid": "41024" 00:17:57.006 }, 00:17:57.006 "auth": { 00:17:57.006 "state": "completed", 00:17:57.006 "digest": "sha256", 00:17:57.006 "dhgroup": "ffdhe8192" 00:17:57.006 } 00:17:57.006 } 00:17:57.006 ]' 00:17:57.006 12:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.263 12:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.263 12:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.263 12:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:57.263 12:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.263 12:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.263 12:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:57.263 12:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.840 12:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGQyMDZiODE0YWEzNDk4ODI1OGVjZWVmZWRmYTFiZTJmMjI3YjdiYjM0YjRmNjg02f23rQ==: --dhchap-ctrl-secret DHHC-1:01:NGFlMjE4MjAzZGU4N2Y2YTZmODhjNWIxMTJlYmQ5ZTFijzKD: 00:17:58.806 12:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.806 12:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:58.806 12:47:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.806 12:47:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.806 12:47:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.806 12:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.806 12:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:58.806 12:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:59.063 12:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:59.063 12:47:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.063 12:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:59.063 12:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:59.063 12:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:59.063 12:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.063 12:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:59.063 12:47:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.063 12:47:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.063 12:47:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.063 12:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.063 12:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.996 00:17:59.996 12:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.996 12:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.996 12:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:59.996 12:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.996 12:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.996 12:47:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.996 12:47:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.996 12:47:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.996 12:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.996 { 00:17:59.996 "cntlid": 47, 00:17:59.996 "qid": 0, 00:17:59.996 "state": "enabled", 00:17:59.996 "thread": "nvmf_tgt_poll_group_000", 00:17:59.996 "listen_address": { 00:17:59.996 "trtype": "TCP", 00:17:59.996 "adrfam": "IPv4", 00:17:59.996 "traddr": "10.0.0.2", 00:17:59.996 "trsvcid": "4420" 00:17:59.996 }, 00:17:59.996 "peer_address": { 00:17:59.996 "trtype": "TCP", 00:17:59.996 "adrfam": "IPv4", 00:17:59.996 "traddr": "10.0.0.1", 00:17:59.996 "trsvcid": "41048" 00:17:59.996 }, 00:17:59.996 "auth": { 00:17:59.996 "state": "completed", 00:17:59.996 "digest": "sha256", 00:17:59.996 "dhgroup": "ffdhe8192" 00:17:59.996 } 00:17:59.996 } 00:17:59.996 ]' 00:17:59.996 12:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.255 12:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.255 12:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.255 12:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:00.255 12:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.255 12:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.255 12:47:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.255 12:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.513 12:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWQyMzY3NTE1MWEzNmFkNzNhOWJkOTI4Nzk3YTdiNDhiMTliYWE4OWViYmE3N2JkNGYyMmIzZWJlNzFmMzFjMUedij8=: 00:18:01.449 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.449 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:01.449 12:47:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.449 12:47:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.449 12:47:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.449 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:01.449 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.449 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.449 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:01.449 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:18:01.708 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:01.708 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.708 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:01.708 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:01.708 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:01.708 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.708 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.708 12:47:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.708 12:47:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.708 12:47:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.708 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.708 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.967 00:18:01.967 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.967 12:47:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.967 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.225 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.225 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.225 12:47:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.225 12:47:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.225 12:47:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.225 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.225 { 00:18:02.225 "cntlid": 49, 00:18:02.225 "qid": 0, 00:18:02.225 "state": "enabled", 00:18:02.225 "thread": "nvmf_tgt_poll_group_000", 00:18:02.225 "listen_address": { 00:18:02.225 "trtype": "TCP", 00:18:02.225 "adrfam": "IPv4", 00:18:02.225 "traddr": "10.0.0.2", 00:18:02.225 "trsvcid": "4420" 00:18:02.225 }, 00:18:02.225 "peer_address": { 00:18:02.225 "trtype": "TCP", 00:18:02.225 "adrfam": "IPv4", 00:18:02.225 "traddr": "10.0.0.1", 00:18:02.225 "trsvcid": "54784" 00:18:02.225 }, 00:18:02.225 "auth": { 00:18:02.225 "state": "completed", 00:18:02.225 "digest": "sha384", 00:18:02.225 "dhgroup": "null" 00:18:02.225 } 00:18:02.225 } 00:18:02.225 ]' 00:18:02.225 12:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.225 12:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.225 12:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.225 12:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:02.225 12:47:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.225 12:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.225 12:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.225 12:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.483 12:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRjMTIwZmE4NDg3Njk4MThjNWQxMmQ1NjhiOTU2MjAyODdmYTg3MDg1ZTQ2ZWEycInjqw==: --dhchap-ctrl-secret DHHC-1:03:ZGVlOTJiMTY0MzQzZDIxNDRlODZhNTZkMTY3YTc4MjgyMTNlZWM0MTU4NzkxMzYxMjc4ZjQzZjRkZmVlZDhiOP8EHms=: 00:18:03.416 12:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.416 12:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:03.416 12:47:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.416 12:47:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.416 12:47:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.416 12:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.416 12:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:03.416 12:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:03.675 12:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:03.675 12:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.675 12:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:03.675 12:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:03.675 12:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:03.675 12:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.675 12:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.675 12:47:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.675 12:47:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.675 12:47:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.675 12:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.675 12:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.933 00:18:04.192 
12:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.192 12:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.192 12:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.450 12:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.450 12:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.450 12:47:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.450 12:47:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.450 12:47:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.450 12:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.450 { 00:18:04.450 "cntlid": 51, 00:18:04.450 "qid": 0, 00:18:04.450 "state": "enabled", 00:18:04.450 "thread": "nvmf_tgt_poll_group_000", 00:18:04.450 "listen_address": { 00:18:04.450 "trtype": "TCP", 00:18:04.450 "adrfam": "IPv4", 00:18:04.450 "traddr": "10.0.0.2", 00:18:04.450 "trsvcid": "4420" 00:18:04.450 }, 00:18:04.450 "peer_address": { 00:18:04.450 "trtype": "TCP", 00:18:04.450 "adrfam": "IPv4", 00:18:04.450 "traddr": "10.0.0.1", 00:18:04.450 "trsvcid": "54816" 00:18:04.450 }, 00:18:04.450 "auth": { 00:18:04.450 "state": "completed", 00:18:04.450 "digest": "sha384", 00:18:04.450 "dhgroup": "null" 00:18:04.450 } 00:18:04.450 } 00:18:04.450 ]' 00:18:04.450 12:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.450 12:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.450 12:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.450 12:47:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:04.450 12:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.450 12:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.450 12:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.450 12:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.017 12:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTI1OGFhZGZkNTQyMDVkOGUzMTdhODA4MDBlZDBjNWapZaHf: --dhchap-ctrl-secret DHHC-1:02:YjM5N2FmODViNGZiNWU4MWY3ZGQ3MzcyYTI1NTM3Yjg5YzgxOTZmOGY2NWY0YjRmr5fL9g==: 00:18:05.951 12:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.951 12:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:05.951 12:47:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.951 12:47:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.951 12:47:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.951 12:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.951 12:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:05.951 12:47:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:06.210 12:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:06.210 12:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.210 12:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:06.210 12:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:06.210 12:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:06.210 12:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.210 12:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.210 12:47:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.210 12:47:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.210 12:47:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.210 12:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.211 12:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:06.776 00:18:06.776 12:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.776 12:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.776 12:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.047 12:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.047 12:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.047 12:47:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.047 12:47:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.047 12:47:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.047 12:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.047 { 00:18:07.047 "cntlid": 53, 00:18:07.047 "qid": 0, 00:18:07.047 "state": "enabled", 00:18:07.047 "thread": "nvmf_tgt_poll_group_000", 00:18:07.047 "listen_address": { 00:18:07.047 "trtype": "TCP", 00:18:07.047 "adrfam": "IPv4", 00:18:07.047 "traddr": "10.0.0.2", 00:18:07.047 "trsvcid": "4420" 00:18:07.047 }, 00:18:07.047 "peer_address": { 00:18:07.047 "trtype": "TCP", 00:18:07.047 "adrfam": "IPv4", 00:18:07.047 "traddr": "10.0.0.1", 00:18:07.047 "trsvcid": "54834" 00:18:07.047 }, 00:18:07.047 "auth": { 00:18:07.047 "state": "completed", 00:18:07.047 "digest": "sha384", 00:18:07.047 "dhgroup": "null" 00:18:07.047 } 00:18:07.047 } 00:18:07.047 ]' 00:18:07.047 12:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.047 12:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.047 12:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:18:07.305 12:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:07.305 12:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.305 12:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.305 12:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.305 12:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.564 12:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGQyMDZiODE0YWEzNDk4ODI1OGVjZWVmZWRmYTFiZTJmMjI3YjdiYjM0YjRmNjg02f23rQ==: --dhchap-ctrl-secret DHHC-1:01:NGFlMjE4MjAzZGU4N2Y2YTZmODhjNWIxMTJlYmQ5ZTFijzKD: 00:18:08.499 12:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.499 12:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:08.499 12:48:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.499 12:48:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.499 12:48:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.499 12:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.499 12:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:18:08.499 12:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:08.499 12:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:08.499 12:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.499 12:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:08.499 12:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:08.499 12:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:08.499 12:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.499 12:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:08.499 12:48:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.499 12:48:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.758 12:48:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.758 12:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.758 12:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:18:09.016 00:18:09.016 12:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.016 12:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.016 12:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.277 12:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.277 12:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.277 12:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.277 12:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.277 12:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.277 12:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.277 { 00:18:09.277 "cntlid": 55, 00:18:09.277 "qid": 0, 00:18:09.277 "state": "enabled", 00:18:09.277 "thread": "nvmf_tgt_poll_group_000", 00:18:09.277 "listen_address": { 00:18:09.277 "trtype": "TCP", 00:18:09.277 "adrfam": "IPv4", 00:18:09.277 "traddr": "10.0.0.2", 00:18:09.277 "trsvcid": "4420" 00:18:09.277 }, 00:18:09.277 "peer_address": { 00:18:09.277 "trtype": "TCP", 00:18:09.277 "adrfam": "IPv4", 00:18:09.277 "traddr": "10.0.0.1", 00:18:09.277 "trsvcid": "54856" 00:18:09.277 }, 00:18:09.277 "auth": { 00:18:09.277 "state": "completed", 00:18:09.277 "digest": "sha384", 00:18:09.277 "dhgroup": "null" 00:18:09.277 } 00:18:09.277 } 00:18:09.277 ]' 00:18:09.277 12:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.277 12:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.277 12:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.277 
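The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` assignment traced at target/auth.sh@37 uses bash's `:+` alternate-value expansion to append the controller-key option pair only when a bidirectional secret exists for that key slot. A minimal sketch of the pattern, with a hypothetical stand-in `ckeys` array (the real test populates it elsewhere):

```shell
#!/usr/bin/env bash
# Sketch of the optional-argument pattern seen at target/auth.sh@37.
# ${var:+word} expands to "word" only when var is set and non-empty,
# so the ckey array is either empty or holds the whole option pair.

build_ckey_args() {
    local keyid=$1
    # Hypothetical stand-in for the test's ckeys[] array: only some
    # key slots carry a controller (bidirectional) secret.
    local -A ckeys=([0]="DHHC-1:00:dummysecret0" [1]="DHHC-1:01:dummysecret1")
    local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"   # how many extra args would be appended
}

n_with=$(build_ckey_args 0)    # slot 0 has a ctrl secret -> "--dhchap-ctrlr-key ckey0"
n_without=$(build_ckey_args 3) # slot 3 is unset -> empty array, no extra args
echo "with secret: $n_with args, without: $n_without args"
```

Because the expansion is unquoted but the inner `"ckey$keyid"` is quoted, the result splits into exactly two array elements when the slot is set, which is why the trace sometimes shows `--dhchap-ctrlr-key ckey0` on the attach line and sometimes (key3) omits it entirely.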
12:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:09.277 12:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.277 12:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.277 12:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.277 12:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.536 12:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWQyMzY3NTE1MWEzNmFkNzNhOWJkOTI4Nzk3YTdiNDhiMTliYWE4OWViYmE3N2JkNGYyMmIzZWJlNzFmMzFjMUedij8=: 00:18:10.470 12:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.470 12:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:10.470 12:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.470 12:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.470 12:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.470 12:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.470 12:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.470 12:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:10.470 12:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:10.728 12:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:10.728 12:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.728 12:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:10.728 12:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:10.728 12:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:10.728 12:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.728 12:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.728 12:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.728 12:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.728 12:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.728 12:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.728 12:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.986 00:18:10.986 12:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.986 12:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.986 12:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.244 12:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.244 12:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.244 12:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.244 12:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.244 12:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.244 12:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.244 { 00:18:11.244 "cntlid": 57, 00:18:11.244 "qid": 0, 00:18:11.244 "state": "enabled", 00:18:11.244 "thread": "nvmf_tgt_poll_group_000", 00:18:11.244 "listen_address": { 00:18:11.244 "trtype": "TCP", 00:18:11.244 "adrfam": "IPv4", 00:18:11.245 "traddr": "10.0.0.2", 00:18:11.245 "trsvcid": "4420" 00:18:11.245 }, 00:18:11.245 "peer_address": { 00:18:11.245 "trtype": "TCP", 00:18:11.245 "adrfam": "IPv4", 00:18:11.245 "traddr": "10.0.0.1", 00:18:11.245 "trsvcid": "51724" 00:18:11.245 }, 00:18:11.245 "auth": { 00:18:11.245 "state": "completed", 00:18:11.245 "digest": "sha384", 00:18:11.245 "dhgroup": "ffdhe2048" 00:18:11.245 } 00:18:11.245 } 00:18:11.245 ]' 00:18:11.245 12:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.245 12:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:18:11.245 12:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.245 12:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:11.245 12:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.503 12:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.503 12:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.503 12:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.759 12:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRjMTIwZmE4NDg3Njk4MThjNWQxMmQ1NjhiOTU2MjAyODdmYTg3MDg1ZTQ2ZWEycInjqw==: --dhchap-ctrl-secret DHHC-1:03:ZGVlOTJiMTY0MzQzZDIxNDRlODZhNTZkMTY3YTc4MjgyMTNlZWM0MTU4NzkxMzYxMjc4ZjQzZjRkZmVlZDhiOP8EHms=: 00:18:12.691 12:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.691 12:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:12.691 12:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.691 12:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.691 12:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.691 12:48:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.691 12:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:12.691 12:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:12.691 12:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:12.691 12:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.691 12:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:12.691 12:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:12.691 12:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:12.691 12:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.691 12:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.691 12:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.691 12:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.691 12:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.691 12:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.691 12:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.949 00:18:13.207 12:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.207 12:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.207 12:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.465 12:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.465 12:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.465 12:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.465 12:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.465 12:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.724 12:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.724 { 00:18:13.724 "cntlid": 59, 00:18:13.724 "qid": 0, 00:18:13.724 "state": "enabled", 00:18:13.724 "thread": "nvmf_tgt_poll_group_000", 00:18:13.724 "listen_address": { 00:18:13.724 "trtype": "TCP", 00:18:13.724 "adrfam": "IPv4", 00:18:13.724 "traddr": "10.0.0.2", 00:18:13.724 "trsvcid": "4420" 00:18:13.724 }, 00:18:13.724 "peer_address": { 00:18:13.724 "trtype": "TCP", 00:18:13.724 "adrfam": "IPv4", 00:18:13.724 "traddr": "10.0.0.1", 00:18:13.724 "trsvcid": "51754" 00:18:13.724 }, 00:18:13.724 "auth": { 00:18:13.724 "state": "completed", 00:18:13.724 "digest": "sha384", 00:18:13.724 "dhgroup": "ffdhe2048" 00:18:13.724 } 00:18:13.724 } 00:18:13.724 ]' 00:18:13.724 
12:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.724 12:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:13.724 12:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.724 12:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:13.724 12:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.724 12:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.724 12:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.724 12:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.983 12:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTI1OGFhZGZkNTQyMDVkOGUzMTdhODA4MDBlZDBjNWapZaHf: --dhchap-ctrl-secret DHHC-1:02:YjM5N2FmODViNGZiNWU4MWY3ZGQ3MzcyYTI1NTM3Yjg5YzgxOTZmOGY2NWY0YjRmr5fL9g==: 00:18:14.933 12:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.933 12:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:14.933 12:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.933 12:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.933 12:48:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.933 12:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.933 12:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:14.933 12:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:14.933 12:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:14.933 12:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.933 12:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:14.933 12:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:14.933 12:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:14.933 12:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.933 12:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.933 12:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.933 12:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.191 12:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.192 12:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:15.192 12:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.450 00:18:15.450 12:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.450 12:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.450 12:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.019 12:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.019 12:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.019 12:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.019 12:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.019 12:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.019 12:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.019 { 00:18:16.019 "cntlid": 61, 00:18:16.019 "qid": 0, 00:18:16.019 "state": "enabled", 00:18:16.019 "thread": "nvmf_tgt_poll_group_000", 00:18:16.019 "listen_address": { 00:18:16.019 "trtype": "TCP", 00:18:16.019 "adrfam": "IPv4", 00:18:16.019 "traddr": "10.0.0.2", 00:18:16.019 "trsvcid": "4420" 00:18:16.019 }, 00:18:16.019 "peer_address": { 00:18:16.019 "trtype": "TCP", 00:18:16.019 "adrfam": "IPv4", 00:18:16.019 "traddr": "10.0.0.1", 00:18:16.019 "trsvcid": "51790" 00:18:16.019 }, 00:18:16.019 "auth": { 00:18:16.019 "state": "completed", 00:18:16.019 "digest": 
"sha384", 00:18:16.019 "dhgroup": "ffdhe2048" 00:18:16.019 } 00:18:16.019 } 00:18:16.019 ]' 00:18:16.019 12:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.019 12:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.019 12:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.019 12:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:16.019 12:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.019 12:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.019 12:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.019 12:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.278 12:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGQyMDZiODE0YWEzNDk4ODI1OGVjZWVmZWRmYTFiZTJmMjI3YjdiYjM0YjRmNjg02f23rQ==: --dhchap-ctrl-secret DHHC-1:01:NGFlMjE4MjAzZGU4N2Y2YTZmODhjNWIxMTJlYmQ5ZTFijzKD: 00:18:17.216 12:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.216 12:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:17.216 12:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.216 12:48:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.216 12:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.216 12:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.216 12:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:17.216 12:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:17.785 12:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:17.785 12:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.785 12:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:17.785 12:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:17.785 12:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:17.785 12:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.785 12:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:17.785 12:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.785 12:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.785 12:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.785 12:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.785 12:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:18.044 00:18:18.044 12:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.044 12:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.044 12:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.304 12:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.304 12:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.304 12:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.304 12:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.304 12:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.304 12:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.304 { 00:18:18.304 "cntlid": 63, 00:18:18.304 "qid": 0, 00:18:18.304 "state": "enabled", 00:18:18.304 "thread": "nvmf_tgt_poll_group_000", 00:18:18.304 "listen_address": { 00:18:18.304 "trtype": "TCP", 00:18:18.304 "adrfam": "IPv4", 00:18:18.304 "traddr": "10.0.0.2", 00:18:18.304 "trsvcid": "4420" 00:18:18.304 }, 00:18:18.304 "peer_address": { 00:18:18.304 "trtype": "TCP", 00:18:18.304 "adrfam": "IPv4", 00:18:18.304 "traddr": "10.0.0.1", 00:18:18.304 "trsvcid": "51814" 00:18:18.304 }, 00:18:18.304 "auth": 
{ 00:18:18.304 "state": "completed", 00:18:18.304 "digest": "sha384", 00:18:18.304 "dhgroup": "ffdhe2048" 00:18:18.304 } 00:18:18.304 } 00:18:18.304 ]' 00:18:18.304 12:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.304 12:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.304 12:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.304 12:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:18.304 12:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.304 12:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.304 12:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.304 12:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.564 12:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWQyMzY3NTE1MWEzNmFkNzNhOWJkOTI4Nzk3YTdiNDhiMTliYWE4OWViYmE3N2JkNGYyMmIzZWJlNzFmMzFjMUedij8=: 00:18:19.501 12:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.501 12:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:19.501 12:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.501 12:48:11 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.501 12:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.501 12:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:19.501 12:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.501 12:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:19.501 12:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:20.069 12:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:20.069 12:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.069 12:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:20.069 12:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:20.069 12:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:20.069 12:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.069 12:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.069 12:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.069 12:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.069 12:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.069 12:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.069 12:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.328 00:18:20.328 12:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.328 12:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.328 12:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.587 12:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.587 12:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.587 12:48:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.587 12:48:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.587 12:48:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.587 12:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.587 { 00:18:20.587 "cntlid": 65, 00:18:20.587 "qid": 0, 00:18:20.587 "state": "enabled", 00:18:20.587 "thread": "nvmf_tgt_poll_group_000", 00:18:20.587 "listen_address": { 00:18:20.587 "trtype": "TCP", 00:18:20.587 "adrfam": "IPv4", 00:18:20.587 "traddr": "10.0.0.2", 00:18:20.587 "trsvcid": "4420" 00:18:20.587 }, 00:18:20.587 "peer_address": { 00:18:20.587 "trtype": "TCP", 
00:18:20.587 "adrfam": "IPv4", 00:18:20.587 "traddr": "10.0.0.1", 00:18:20.587 "trsvcid": "48916" 00:18:20.587 }, 00:18:20.587 "auth": { 00:18:20.587 "state": "completed", 00:18:20.587 "digest": "sha384", 00:18:20.587 "dhgroup": "ffdhe3072" 00:18:20.587 } 00:18:20.587 } 00:18:20.587 ]' 00:18:20.587 12:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.587 12:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:20.587 12:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.587 12:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:20.587 12:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.587 12:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.587 12:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.587 12:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.153 12:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRjMTIwZmE4NDg3Njk4MThjNWQxMmQ1NjhiOTU2MjAyODdmYTg3MDg1ZTQ2ZWEycInjqw==: --dhchap-ctrl-secret DHHC-1:03:ZGVlOTJiMTY0MzQzZDIxNDRlODZhNTZkMTY3YTc4MjgyMTNlZWM0MTU4NzkxMzYxMjc4ZjQzZjRkZmVlZDhiOP8EHms=: 00:18:22.089 12:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.089 12:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:22.089 12:48:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.089 12:48:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.089 12:48:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.089 12:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.089 12:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:22.089 12:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:22.658 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:22.658 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.658 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:22.658 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:22.658 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:22.658 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.658 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.658 12:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.658 12:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.658 12:48:14 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.658 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.658 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.918 00:18:22.918 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.918 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.918 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.918 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.918 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.918 12:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.918 12:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.177 12:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.177 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.177 { 00:18:23.177 "cntlid": 67, 00:18:23.177 "qid": 0, 00:18:23.177 "state": "enabled", 00:18:23.177 "thread": "nvmf_tgt_poll_group_000", 00:18:23.177 "listen_address": { 00:18:23.177 "trtype": "TCP", 00:18:23.177 "adrfam": 
"IPv4", 00:18:23.177 "traddr": "10.0.0.2", 00:18:23.177 "trsvcid": "4420" 00:18:23.177 }, 00:18:23.177 "peer_address": { 00:18:23.177 "trtype": "TCP", 00:18:23.177 "adrfam": "IPv4", 00:18:23.177 "traddr": "10.0.0.1", 00:18:23.177 "trsvcid": "48940" 00:18:23.177 }, 00:18:23.177 "auth": { 00:18:23.177 "state": "completed", 00:18:23.177 "digest": "sha384", 00:18:23.177 "dhgroup": "ffdhe3072" 00:18:23.177 } 00:18:23.177 } 00:18:23.177 ]' 00:18:23.177 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.177 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.177 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.177 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:23.177 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.177 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.177 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.177 12:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.435 12:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTI1OGFhZGZkNTQyMDVkOGUzMTdhODA4MDBlZDBjNWapZaHf: --dhchap-ctrl-secret DHHC-1:02:YjM5N2FmODViNGZiNWU4MWY3ZGQ3MzcyYTI1NTM3Yjg5YzgxOTZmOGY2NWY0YjRmr5fL9g==: 00:18:24.371 12:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:24.371 12:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:24.371 12:48:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.371 12:48:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.371 12:48:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.371 12:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.371 12:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:24.371 12:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:24.371 12:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:24.371 12:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.371 12:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:24.371 12:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:24.371 12:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:24.371 12:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.371 12:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.371 12:48:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.371 12:48:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.371 12:48:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.371 12:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.371 12:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.635 00:18:24.896 12:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.896 12:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.896 12:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.155 12:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.155 12:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.155 12:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.155 12:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.413 12:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.413 12:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.413 { 00:18:25.413 "cntlid": 69, 00:18:25.413 "qid": 0, 00:18:25.413 "state": "enabled", 00:18:25.413 "thread": 
"nvmf_tgt_poll_group_000", 00:18:25.413 "listen_address": { 00:18:25.413 "trtype": "TCP", 00:18:25.413 "adrfam": "IPv4", 00:18:25.413 "traddr": "10.0.0.2", 00:18:25.413 "trsvcid": "4420" 00:18:25.413 }, 00:18:25.413 "peer_address": { 00:18:25.413 "trtype": "TCP", 00:18:25.413 "adrfam": "IPv4", 00:18:25.413 "traddr": "10.0.0.1", 00:18:25.413 "trsvcid": "48978" 00:18:25.413 }, 00:18:25.413 "auth": { 00:18:25.413 "state": "completed", 00:18:25.413 "digest": "sha384", 00:18:25.413 "dhgroup": "ffdhe3072" 00:18:25.413 } 00:18:25.413 } 00:18:25.413 ]' 00:18:25.413 12:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.413 12:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.413 12:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.413 12:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:25.413 12:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.413 12:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.413 12:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.413 12:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.671 12:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGQyMDZiODE0YWEzNDk4ODI1OGVjZWVmZWRmYTFiZTJmMjI3YjdiYjM0YjRmNjg02f23rQ==: --dhchap-ctrl-secret DHHC-1:01:NGFlMjE4MjAzZGU4N2Y2YTZmODhjNWIxMTJlYmQ5ZTFijzKD: 00:18:26.608 12:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.608 12:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:26.608 12:48:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.608 12:48:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.608 12:48:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.608 12:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.608 12:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:26.608 12:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:26.866 12:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:26.866 12:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.866 12:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:26.866 12:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:26.866 12:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:26.866 12:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.866 12:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:26.866 12:48:18 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.866 12:48:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.866 12:48:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.866 12:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.866 12:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.124 00:18:27.124 12:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.124 12:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.124 12:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.382 12:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.382 12:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.382 12:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.382 12:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.382 12:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.382 12:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.382 { 00:18:27.382 "cntlid": 71, 00:18:27.382 "qid": 0, 00:18:27.382 "state": "enabled", 00:18:27.382 "thread": 
"nvmf_tgt_poll_group_000", 00:18:27.382 "listen_address": { 00:18:27.382 "trtype": "TCP", 00:18:27.382 "adrfam": "IPv4", 00:18:27.382 "traddr": "10.0.0.2", 00:18:27.382 "trsvcid": "4420" 00:18:27.382 }, 00:18:27.382 "peer_address": { 00:18:27.382 "trtype": "TCP", 00:18:27.382 "adrfam": "IPv4", 00:18:27.382 "traddr": "10.0.0.1", 00:18:27.382 "trsvcid": "49020" 00:18:27.382 }, 00:18:27.382 "auth": { 00:18:27.382 "state": "completed", 00:18:27.382 "digest": "sha384", 00:18:27.382 "dhgroup": "ffdhe3072" 00:18:27.382 } 00:18:27.382 } 00:18:27.382 ]' 00:18:27.382 12:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.382 12:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:27.382 12:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.382 12:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:27.382 12:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.641 12:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.641 12:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.641 12:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.900 12:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWQyMzY3NTE1MWEzNmFkNzNhOWJkOTI4Nzk3YTdiNDhiMTliYWE4OWViYmE3N2JkNGYyMmIzZWJlNzFmMzFjMUedij8=: 00:18:28.872 12:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.872 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.872 12:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:28.872 12:48:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.872 12:48:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.872 12:48:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.872 12:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.872 12:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.872 12:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:28.872 12:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:28.872 12:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:28.872 12:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.872 12:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:28.872 12:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:28.872 12:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:28.872 12:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.872 12:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:28.872 12:48:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.872 12:48:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.872 12:48:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.872 12:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.872 12:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.157 00:18:29.157 12:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.157 12:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.157 12:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.416 12:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.416 12:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.416 12:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.416 12:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.675 12:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.675 12:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:18:29.675 { 00:18:29.675 "cntlid": 73, 00:18:29.675 "qid": 0, 00:18:29.675 "state": "enabled", 00:18:29.675 "thread": "nvmf_tgt_poll_group_000", 00:18:29.675 "listen_address": { 00:18:29.675 "trtype": "TCP", 00:18:29.675 "adrfam": "IPv4", 00:18:29.675 "traddr": "10.0.0.2", 00:18:29.675 "trsvcid": "4420" 00:18:29.675 }, 00:18:29.675 "peer_address": { 00:18:29.675 "trtype": "TCP", 00:18:29.675 "adrfam": "IPv4", 00:18:29.675 "traddr": "10.0.0.1", 00:18:29.675 "trsvcid": "49042" 00:18:29.675 }, 00:18:29.675 "auth": { 00:18:29.675 "state": "completed", 00:18:29.675 "digest": "sha384", 00:18:29.675 "dhgroup": "ffdhe4096" 00:18:29.675 } 00:18:29.675 } 00:18:29.675 ]' 00:18:29.675 12:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.675 12:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.675 12:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.675 12:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:29.675 12:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.675 12:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.675 12:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.675 12:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.934 12:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRjMTIwZmE4NDg3Njk4MThjNWQxMmQ1NjhiOTU2MjAyODdmYTg3MDg1ZTQ2ZWEycInjqw==: --dhchap-ctrl-secret 
DHHC-1:03:ZGVlOTJiMTY0MzQzZDIxNDRlODZhNTZkMTY3YTc4MjgyMTNlZWM0MTU4NzkxMzYxMjc4ZjQzZjRkZmVlZDhiOP8EHms=: 00:18:30.871 12:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.871 12:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:30.871 12:48:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.871 12:48:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.871 12:48:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.871 12:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.871 12:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:30.871 12:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:31.129 12:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:31.129 12:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.129 12:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.129 12:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:31.129 12:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:31.129 12:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.129 12:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.129 12:48:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.129 12:48:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.129 12:48:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.129 12:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.129 12:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.387 00:18:31.387 12:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.387 12:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.387 12:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.644 12:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.644 12:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.644 12:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.644 12:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.644 12:48:23 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.644 12:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.644 { 00:18:31.644 "cntlid": 75, 00:18:31.644 "qid": 0, 00:18:31.644 "state": "enabled", 00:18:31.644 "thread": "nvmf_tgt_poll_group_000", 00:18:31.644 "listen_address": { 00:18:31.644 "trtype": "TCP", 00:18:31.644 "adrfam": "IPv4", 00:18:31.644 "traddr": "10.0.0.2", 00:18:31.644 "trsvcid": "4420" 00:18:31.644 }, 00:18:31.644 "peer_address": { 00:18:31.644 "trtype": "TCP", 00:18:31.644 "adrfam": "IPv4", 00:18:31.644 "traddr": "10.0.0.1", 00:18:31.644 "trsvcid": "54116" 00:18:31.644 }, 00:18:31.644 "auth": { 00:18:31.644 "state": "completed", 00:18:31.644 "digest": "sha384", 00:18:31.644 "dhgroup": "ffdhe4096" 00:18:31.644 } 00:18:31.644 } 00:18:31.644 ]' 00:18:31.644 12:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.644 12:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.644 12:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.644 12:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:31.644 12:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.901 12:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.901 12:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.901 12:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.159 12:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTI1OGFhZGZkNTQyMDVkOGUzMTdhODA4MDBlZDBjNWapZaHf: --dhchap-ctrl-secret DHHC-1:02:YjM5N2FmODViNGZiNWU4MWY3ZGQ3MzcyYTI1NTM3Yjg5YzgxOTZmOGY2NWY0YjRmr5fL9g==:
00:18:32.724 12:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:32.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:32.724 12:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:18:32.724 12:48:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:32.724 12:48:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:32.982 12:48:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:32.982 12:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:32.982 12:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:32.982 12:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:32.982 12:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2
00:18:32.982 12:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:32.982 12:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:32.982 12:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:32.982 12:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:32.982 12:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:32.982 12:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:32.982 12:48:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:32.982 12:48:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:33.240 12:48:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:33.240 12:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:33.240 12:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:33.497
00:18:33.497 12:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:33.497 12:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:33.497 12:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:33.755 12:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:33.755 12:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:33.755 12:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:33.755 12:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:33.755 12:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:33.755 12:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:33.755 {
00:18:33.755 "cntlid": 77,
00:18:33.755 "qid": 0,
00:18:33.755 "state": "enabled",
00:18:33.755 "thread": "nvmf_tgt_poll_group_000",
00:18:33.755 "listen_address": {
00:18:33.755 "trtype": "TCP",
00:18:33.755 "adrfam": "IPv4",
00:18:33.755 "traddr": "10.0.0.2",
00:18:33.755 "trsvcid": "4420"
00:18:33.755 },
00:18:33.755 "peer_address": {
00:18:33.755 "trtype": "TCP",
00:18:33.755 "adrfam": "IPv4",
00:18:33.755 "traddr": "10.0.0.1",
00:18:33.755 "trsvcid": "54142"
00:18:33.755 },
00:18:33.755 "auth": {
00:18:33.755 "state": "completed",
00:18:33.755 "digest": "sha384",
00:18:33.755 "dhgroup": "ffdhe4096"
00:18:33.755 }
00:18:33.755 }
00:18:33.755 ]'
00:18:33.755 12:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:33.755 12:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:33.755 12:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:33.755 12:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:33.755 12:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:34.012 12:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:34.013 12:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:34.013 12:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:34.270 12:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGQyMDZiODE0YWEzNDk4ODI1OGVjZWVmZWRmYTFiZTJmMjI3YjdiYjM0YjRmNjg02f23rQ==: --dhchap-ctrl-secret DHHC-1:01:NGFlMjE4MjAzZGU4N2Y2YTZmODhjNWIxMTJlYmQ5ZTFijzKD:
00:18:35.205 12:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:35.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:35.205 12:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:18:35.205 12:48:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:35.205 12:48:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:35.206 12:48:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:35.206 12:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:35.206 12:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:35.206 12:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:35.206 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3
00:18:35.206 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:35.206 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:35.206 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:35.206 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:35.206 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:35.206 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3
00:18:35.206 12:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:35.206 12:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:35.206 12:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:35.206 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:35.206 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:35.774
00:18:35.774 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:35.774 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:35.774 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:36.033 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:36.033 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:36.033 12:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:36.033 12:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.033 12:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:36.033 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:36.033 {
00:18:36.033 "cntlid": 79,
00:18:36.033 "qid": 0,
00:18:36.033 "state": "enabled",
00:18:36.033 "thread": "nvmf_tgt_poll_group_000",
00:18:36.033 "listen_address": {
00:18:36.033 "trtype": "TCP",
00:18:36.033 "adrfam": "IPv4",
00:18:36.033 "traddr": "10.0.0.2",
00:18:36.033 "trsvcid": "4420"
00:18:36.033 },
00:18:36.033 "peer_address": {
00:18:36.033 "trtype": "TCP",
00:18:36.033 "adrfam": "IPv4",
00:18:36.033 "traddr": "10.0.0.1",
00:18:36.033 "trsvcid": "54162"
00:18:36.033 },
00:18:36.033 "auth": {
00:18:36.033 "state": "completed",
00:18:36.033 "digest": "sha384",
00:18:36.033 "dhgroup": "ffdhe4096"
00:18:36.033 }
00:18:36.033 }
00:18:36.033 ]'
00:18:36.033 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:36.033 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:36.033 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:36.033 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:36.033 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:36.033 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:36.033 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:36.033 12:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:36.292 12:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWQyMzY3NTE1MWEzNmFkNzNhOWJkOTI4Nzk3YTdiNDhiMTliYWE4OWViYmE3N2JkNGYyMmIzZWJlNzFmMzFjMUedij8=:
00:18:37.228 12:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:37.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:37.228 12:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:18:37.229 12:48:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:37.229 12:48:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:37.229 12:48:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:37.229 12:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:37.229 12:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:37.229 12:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:37.229 12:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:37.566 12:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0
00:18:37.488 12:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:37.488 12:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:37.488 12:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:18:37.488 12:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:37.488 12:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:37.488 12:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:37.488 12:48:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:37.488 12:48:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:37.488 12:48:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:37.488 12:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:37.488 12:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:38.056
00:18:38.056 12:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:38.056 12:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:38.056 12:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:38.056 12:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:38.056 12:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:38.056 12:48:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:38.056 12:48:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:38.315 12:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:38.315 12:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:38.315 {
00:18:38.315 "cntlid": 81,
00:18:38.315 "qid": 0,
00:18:38.315 "state": "enabled",
00:18:38.315 "thread": "nvmf_tgt_poll_group_000",
00:18:38.315 "listen_address": {
00:18:38.315 "trtype": "TCP",
00:18:38.315 "adrfam": "IPv4",
00:18:38.315 "traddr": "10.0.0.2",
00:18:38.315 "trsvcid": "4420"
00:18:38.315 },
00:18:38.315 "peer_address": {
00:18:38.315 "trtype": "TCP",
00:18:38.315 "adrfam": "IPv4",
00:18:38.315 "traddr": "10.0.0.1",
00:18:38.315 "trsvcid": "54200"
00:18:38.315 },
00:18:38.315 "auth": {
00:18:38.315 "state": "completed",
00:18:38.315 "digest": "sha384",
00:18:38.315 "dhgroup": "ffdhe6144"
00:18:38.315 }
00:18:38.315 }
00:18:38.315 ]'
00:18:38.315 12:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:38.315 12:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:38.315 12:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:38.315 12:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:38.315 12:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:38.315 12:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:38.315 12:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:38.315 12:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:38.883 12:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRjMTIwZmE4NDg3Njk4MThjNWQxMmQ1NjhiOTU2MjAyODdmYTg3MDg1ZTQ2ZWEycInjqw==: --dhchap-ctrl-secret DHHC-1:03:ZGVlOTJiMTY0MzQzZDIxNDRlODZhNTZkMTY3YTc4MjgyMTNlZWM0MTU4NzkxMzYxMjc4ZjQzZjRkZmVlZDhiOP8EHms=:
00:18:39.818 12:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:39.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:39.818 12:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:18:39.818 12:48:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:39.818 12:48:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.818 12:48:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:39.818 12:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:39.818 12:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:39.818 12:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:39.818 12:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1
00:18:39.818 12:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:39.818 12:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:39.818 12:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:18:39.818 12:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:39.818 12:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:39.818 12:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:39.818 12:48:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:39.818 12:48:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.818 12:48:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:39.818 12:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:39.818 12:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:40.385
00:18:40.385 12:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:40.385 12:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:40.385 12:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:40.952 12:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:40.952 12:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:40.952 12:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:40.952 12:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:40.952 12:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:40.952 12:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:40.952 {
00:18:40.952 "cntlid": 83,
00:18:40.952 "qid": 0,
00:18:40.952 "state": "enabled",
00:18:40.952 "thread": "nvmf_tgt_poll_group_000",
00:18:40.952 "listen_address": {
00:18:40.952 "trtype": "TCP",
00:18:40.952 "adrfam": "IPv4",
00:18:40.952 "traddr": "10.0.0.2",
00:18:40.952 "trsvcid": "4420"
00:18:40.952 },
00:18:40.952 "peer_address": {
00:18:40.952 "trtype": "TCP",
00:18:40.952 "adrfam": "IPv4",
00:18:40.952 "traddr": "10.0.0.1",
00:18:40.952 "trsvcid": "50526"
00:18:40.953 },
00:18:40.953 "auth": {
00:18:40.953 "state": "completed",
00:18:40.953 "digest": "sha384",
00:18:40.953 "dhgroup": "ffdhe6144"
00:18:40.953 }
00:18:40.953 }
00:18:40.953 ]'
00:18:40.953 12:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:40.953 12:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:40.953 12:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:40.953 12:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:40.953 12:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:41.212 12:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:41.212 12:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:41.212 12:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:41.477 12:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTI1OGFhZGZkNTQyMDVkOGUzMTdhODA4MDBlZDBjNWapZaHf: --dhchap-ctrl-secret DHHC-1:02:YjM5N2FmODViNGZiNWU4MWY3ZGQ3MzcyYTI1NTM3Yjg5YzgxOTZmOGY2NWY0YjRmr5fL9g==:
00:18:42.043 12:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:42.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:42.043 12:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:18:42.043 12:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:42.043 12:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.043 12:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:42.043 12:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:42.043 12:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:42.043 12:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:42.299 12:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2
00:18:42.299 12:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:42.299 12:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:42.299 12:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:18:42.299 12:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:42.299 12:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:42.299 12:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:42.299 12:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:42.299 12:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.556 12:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:42.556 12:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:42.556 12:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:42.814
00:18:42.814 12:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:42.814 12:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:42.814 12:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:43.380 12:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:43.380 12:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:43.380 12:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:43.380 12:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:43.380 12:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:43.380 12:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:43.380 {
00:18:43.380 "cntlid": 85,
00:18:43.380 "qid": 0,
00:18:43.380 "state": "enabled",
00:18:43.380 "thread": "nvmf_tgt_poll_group_000",
00:18:43.380 "listen_address": {
00:18:43.380 "trtype": "TCP",
00:18:43.380 "adrfam": "IPv4",
00:18:43.380 "traddr": "10.0.0.2",
00:18:43.380 "trsvcid": "4420"
00:18:43.380 },
00:18:43.380 "peer_address": {
00:18:43.380 "trtype": "TCP",
00:18:43.380 "adrfam": "IPv4",
00:18:43.380 "traddr": "10.0.0.1",
00:18:43.380 "trsvcid": "50550"
00:18:43.380 },
00:18:43.380 "auth": {
00:18:43.380 "state": "completed",
00:18:43.380 "digest": "sha384",
00:18:43.380 "dhgroup": "ffdhe6144"
00:18:43.380 }
00:18:43.380 }
00:18:43.380 ]'
00:18:43.380 12:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:43.380 12:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:43.380 12:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:43.637 12:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:43.637 12:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:43.637 12:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:43.637 12:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:43.637 12:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:43.896 12:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGQyMDZiODE0YWEzNDk4ODI1OGVjZWVmZWRmYTFiZTJmMjI3YjdiYjM0YjRmNjg02f23rQ==: --dhchap-ctrl-secret DHHC-1:01:NGFlMjE4MjAzZGU4N2Y2YTZmODhjNWIxMTJlYmQ5ZTFijzKD:
00:18:44.831 12:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:44.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:44.832 12:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:18:44.832 12:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:44.832 12:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:44.832 12:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:44.832 12:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:44.832 12:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:44.832 12:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:44.832 12:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3
00:18:44.832 12:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:44.832 12:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:44.832 12:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:18:44.832 12:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:44.832 12:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:44.832 12:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3
00:18:44.832 12:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:44.832 12:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:44.832 12:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:44.832 12:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:44.832 12:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:45.399
00:18:45.399 12:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:45.399 12:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:45.399 12:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:45.965 12:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:45.965 12:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:45.965 12:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:45.965 12:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:45.965 12:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:45.965 12:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:45.965 {
00:18:45.965 "cntlid": 87,
00:18:45.965 "qid": 0,
00:18:45.965 "state": "enabled",
00:18:45.965 "thread": "nvmf_tgt_poll_group_000",
00:18:45.965 "listen_address": {
00:18:45.965 "trtype": "TCP",
00:18:45.965 "adrfam": "IPv4",
00:18:45.965 "traddr": "10.0.0.2",
00:18:45.965 "trsvcid": "4420"
00:18:45.965 },
00:18:45.965 "peer_address": {
00:18:45.965 "trtype": "TCP",
00:18:45.965 "adrfam": "IPv4",
00:18:45.965 "traddr": "10.0.0.1",
00:18:45.965 "trsvcid": "50582"
00:18:45.965 },
00:18:45.965 "auth": {
00:18:45.965 "state": "completed",
00:18:45.965 "digest": "sha384",
00:18:45.965 "dhgroup": "ffdhe6144"
00:18:45.965 }
00:18:45.965 }
00:18:45.965 ]'
00:18:45.965 12:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:45.965 12:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:45.965 12:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:45.965 12:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:45.965 12:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:46.224 12:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:46.224 12:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:46.224 12:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:46.482 12:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWQyMzY3NTE1MWEzNmFkNzNhOWJkOTI4Nzk3YTdiNDhiMTliYWE4OWViYmE3N2JkNGYyMmIzZWJlNzFmMzFjMUedij8=:
00:18:47.049 12:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:47.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:47.049 12:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:18:47.049 12:48:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:47.049 12:48:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:47.307 12:48:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:47.307 12:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:47.307 12:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:47.307 12:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:47.307 12:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:47.566 12:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0
00:18:47.566 12:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:47.566 12:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:47.566 12:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:18:47.566 12:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:47.566 12:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:47.566 12:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:47.566 12:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:47.566 12:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:47.566 12:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:47.566 12:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:47.566 12:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:48.132
00:18:48.132 12:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:48.132 12:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:48.132 12:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:48.699 12:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:48.699 12:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:48.699 12:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:48.699 12:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:48.699 12:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:48.699 12:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:48.699 {
00:18:48.699 "cntlid": 89,
00:18:48.699 "qid": 0,
00:18:48.699 "state": "enabled",
00:18:48.699 "thread": "nvmf_tgt_poll_group_000",
00:18:48.699 "listen_address": {
00:18:48.699 "trtype": "TCP",
00:18:48.699 "adrfam": "IPv4",
00:18:48.699 "traddr": "10.0.0.2",
00:18:48.699 "trsvcid": "4420"
00:18:48.699 },
00:18:48.699 "peer_address": {
00:18:48.699 "trtype": "TCP",
00:18:48.699 "adrfam": "IPv4",
00:18:48.699 "traddr": "10.0.0.1",
00:18:48.699 "trsvcid": "50612"
00:18:48.699 },
00:18:48.699 "auth": {
00:18:48.699 "state": "completed",
00:18:48.699 "digest": "sha384",
00:18:48.699 "dhgroup": "ffdhe8192"
00:18:48.699 }
00:18:48.699 }
00:18:48.699 ]'
00:18:48.699 12:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:48.699 12:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:48.699 12:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:48.956 12:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:48.956 12:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:48.956 12:48:40
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.956 12:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.956 12:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.215 12:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRjMTIwZmE4NDg3Njk4MThjNWQxMmQ1NjhiOTU2MjAyODdmYTg3MDg1ZTQ2ZWEycInjqw==: --dhchap-ctrl-secret DHHC-1:03:ZGVlOTJiMTY0MzQzZDIxNDRlODZhNTZkMTY3YTc4MjgyMTNlZWM0MTU4NzkxMzYxMjc4ZjQzZjRkZmVlZDhiOP8EHms=: 00:18:50.150 12:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.150 12:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:50.150 12:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.150 12:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.150 12:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.150 12:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.150 12:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:50.150 12:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:50.409 12:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:50.409 12:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.409 12:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:50.409 12:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:50.409 12:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:50.409 12:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.409 12:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.409 12:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.409 12:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.409 12:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.409 12:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.409 12:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.346 00:18:51.346 12:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:18:51.346 12:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.346 12:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.346 12:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.346 12:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.346 12:48:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.346 12:48:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.346 12:48:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.346 12:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.346 { 00:18:51.346 "cntlid": 91, 00:18:51.346 "qid": 0, 00:18:51.347 "state": "enabled", 00:18:51.347 "thread": "nvmf_tgt_poll_group_000", 00:18:51.347 "listen_address": { 00:18:51.347 "trtype": "TCP", 00:18:51.347 "adrfam": "IPv4", 00:18:51.347 "traddr": "10.0.0.2", 00:18:51.347 "trsvcid": "4420" 00:18:51.347 }, 00:18:51.347 "peer_address": { 00:18:51.347 "trtype": "TCP", 00:18:51.347 "adrfam": "IPv4", 00:18:51.347 "traddr": "10.0.0.1", 00:18:51.347 "trsvcid": "34080" 00:18:51.347 }, 00:18:51.347 "auth": { 00:18:51.347 "state": "completed", 00:18:51.347 "digest": "sha384", 00:18:51.347 "dhgroup": "ffdhe8192" 00:18:51.347 } 00:18:51.347 } 00:18:51.347 ]' 00:18:51.347 12:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.347 12:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.347 12:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.605 12:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 
]] 00:18:51.605 12:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.605 12:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.605 12:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.605 12:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.864 12:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTI1OGFhZGZkNTQyMDVkOGUzMTdhODA4MDBlZDBjNWapZaHf: --dhchap-ctrl-secret DHHC-1:02:YjM5N2FmODViNGZiNWU4MWY3ZGQ3MzcyYTI1NTM3Yjg5YzgxOTZmOGY2NWY0YjRmr5fL9g==: 00:18:53.242 12:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.242 12:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:53.242 12:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.242 12:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.242 12:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.242 12:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.242 12:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:53.242 12:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:53.501 12:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:53.501 12:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.501 12:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:53.501 12:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:53.501 12:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:53.501 12:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.501 12:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.502 12:48:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.502 12:48:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.502 12:48:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.502 12:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.502 12:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.440 
00:18:54.440 12:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.440 12:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.440 12:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.440 12:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.440 12:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.440 12:48:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.440 12:48:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.699 12:48:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.699 12:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.699 { 00:18:54.699 "cntlid": 93, 00:18:54.699 "qid": 0, 00:18:54.699 "state": "enabled", 00:18:54.699 "thread": "nvmf_tgt_poll_group_000", 00:18:54.699 "listen_address": { 00:18:54.699 "trtype": "TCP", 00:18:54.699 "adrfam": "IPv4", 00:18:54.699 "traddr": "10.0.0.2", 00:18:54.699 "trsvcid": "4420" 00:18:54.699 }, 00:18:54.699 "peer_address": { 00:18:54.699 "trtype": "TCP", 00:18:54.699 "adrfam": "IPv4", 00:18:54.699 "traddr": "10.0.0.1", 00:18:54.699 "trsvcid": "34124" 00:18:54.699 }, 00:18:54.699 "auth": { 00:18:54.699 "state": "completed", 00:18:54.699 "digest": "sha384", 00:18:54.699 "dhgroup": "ffdhe8192" 00:18:54.699 } 00:18:54.699 } 00:18:54.699 ]' 00:18:54.699 12:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.699 12:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.699 12:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.699 12:48:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:54.699 12:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.699 12:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.699 12:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.699 12:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.958 12:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGQyMDZiODE0YWEzNDk4ODI1OGVjZWVmZWRmYTFiZTJmMjI3YjdiYjM0YjRmNjg02f23rQ==: --dhchap-ctrl-secret DHHC-1:01:NGFlMjE4MjAzZGU4N2Y2YTZmODhjNWIxMTJlYmQ5ZTFijzKD: 00:18:55.896 12:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.896 12:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:55.896 12:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.896 12:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.896 12:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.896 12:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.896 12:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 
00:18:55.896 12:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:56.156 12:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:56.156 12:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.156 12:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:56.156 12:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:56.156 12:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:56.156 12:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.156 12:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:56.156 12:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.156 12:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.156 12:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.156 12:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:56.156 12:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:56.724 
00:18:56.724 12:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.724 12:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.724 12:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.292 12:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.292 12:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.292 12:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.292 12:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.292 12:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.292 12:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.292 { 00:18:57.292 "cntlid": 95, 00:18:57.292 "qid": 0, 00:18:57.292 "state": "enabled", 00:18:57.292 "thread": "nvmf_tgt_poll_group_000", 00:18:57.292 "listen_address": { 00:18:57.292 "trtype": "TCP", 00:18:57.292 "adrfam": "IPv4", 00:18:57.292 "traddr": "10.0.0.2", 00:18:57.292 "trsvcid": "4420" 00:18:57.292 }, 00:18:57.292 "peer_address": { 00:18:57.292 "trtype": "TCP", 00:18:57.292 "adrfam": "IPv4", 00:18:57.292 "traddr": "10.0.0.1", 00:18:57.292 "trsvcid": "34138" 00:18:57.292 }, 00:18:57.292 "auth": { 00:18:57.292 "state": "completed", 00:18:57.292 "digest": "sha384", 00:18:57.292 "dhgroup": "ffdhe8192" 00:18:57.292 } 00:18:57.292 } 00:18:57.292 ]' 00:18:57.292 12:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.292 12:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.292 12:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.550 12:48:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:57.550 12:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.550 12:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.550 12:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.550 12:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.809 12:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWQyMzY3NTE1MWEzNmFkNzNhOWJkOTI4Nzk3YTdiNDhiMTliYWE4OWViYmE3N2JkNGYyMmIzZWJlNzFmMzFjMUedij8=: 00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
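The `auth.sh@91`-`@94` trace lines above come from three nested loops: for every digest, every DH group, and every key index, the host reconfigures `bdev_nvme_set_options`, registers the host on the subsystem, attaches, verifies, and tears down. A minimal dry-run sketch of that sweep follows; the `rpc()` stub only echoes the `rpc.py` invocation it would make (nothing touches a live SPDK target), the digest/dhgroup lists are abridged illustrative assumptions, and the `--dhchap-ctrlr-key` arguments seen in the trace are omitted for brevity.

```shell
# Dry-run sketch of the per-digest/dhgroup/keyid sweep seen in the trace.
# rpc() echoes the would-be rpc.py call instead of executing it.
rpc() { echo "rpc.py -s /var/tmp/host.sock $*"; }

subnqn="nqn.2024-03.io.spdk:cnode0"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562"

sweep() {
  for digest in sha384 sha512; do            # auth.sh@91 (abridged list)
    for dhgroup in ffdhe8192 null; do        # auth.sh@92 (abridged list)
      for keyid in 0 1 2 3; do               # auth.sh@93
        rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
        rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid"
        rpc bdev_nvme_detach_controller nvme0
        rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
      done
    done
  done
}

sweep | head -n 5   # print the five calls of one full key iteration
```

Each inner iteration mirrors one `connect_authenticate` round in the log: set the allowed digest and DH group, add the host with the key under test, attach, then detach and remove so the next combination starts clean.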
00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.838 12:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.130 00:18:59.130 12:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.130 12:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.130 12:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.698 12:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.698 12:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.698 12:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.698 12:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.698 12:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.698 12:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.698 { 00:18:59.698 "cntlid": 97, 00:18:59.698 "qid": 0, 00:18:59.698 "state": "enabled", 00:18:59.698 "thread": "nvmf_tgt_poll_group_000", 00:18:59.698 "listen_address": { 00:18:59.698 "trtype": "TCP", 00:18:59.698 "adrfam": "IPv4", 00:18:59.698 "traddr": "10.0.0.2", 00:18:59.698 "trsvcid": "4420" 00:18:59.698 }, 00:18:59.698 "peer_address": { 00:18:59.698 "trtype": "TCP", 00:18:59.698 "adrfam": "IPv4", 00:18:59.698 "traddr": "10.0.0.1", 00:18:59.698 "trsvcid": "34166" 00:18:59.698 }, 00:18:59.698 "auth": { 00:18:59.698 "state": "completed", 00:18:59.698 "digest": "sha512", 00:18:59.698 "dhgroup": "null" 00:18:59.698 } 00:18:59.698 } 00:18:59.698 ]' 00:18:59.698 12:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
00:18:59.698 12:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.698 12:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.698 12:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:59.698 12:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.698 12:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.698 12:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.698 12:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.957 12:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRjMTIwZmE4NDg3Njk4MThjNWQxMmQ1NjhiOTU2MjAyODdmYTg3MDg1ZTQ2ZWEycInjqw==: --dhchap-ctrl-secret DHHC-1:03:ZGVlOTJiMTY0MzQzZDIxNDRlODZhNTZkMTY3YTc4MjgyMTNlZWM0MTU4NzkxMzYxMjc4ZjQzZjRkZmVlZDhiOP8EHms=: 00:19:00.893 12:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.893 12:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:00.893 12:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.893 12:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.893 12:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:19:00.893 12:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.893 12:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:00.893 12:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:01.152 12:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:01.152 12:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.152 12:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:01.152 12:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:01.152 12:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:01.152 12:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.152 12:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.152 12:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.152 12:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.152 12:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.152 12:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.152 12:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.411 00:19:01.412 12:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.412 12:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.412 12:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.670 12:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.670 12:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.670 12:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.670 12:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.670 12:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.670 12:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.670 { 00:19:01.670 "cntlid": 99, 00:19:01.670 "qid": 0, 00:19:01.670 "state": "enabled", 00:19:01.670 "thread": "nvmf_tgt_poll_group_000", 00:19:01.670 "listen_address": { 00:19:01.670 "trtype": "TCP", 00:19:01.670 "adrfam": "IPv4", 00:19:01.670 "traddr": "10.0.0.2", 00:19:01.670 "trsvcid": "4420" 00:19:01.670 }, 00:19:01.670 "peer_address": { 00:19:01.670 "trtype": "TCP", 00:19:01.670 "adrfam": "IPv4", 00:19:01.670 "traddr": "10.0.0.1", 00:19:01.670 "trsvcid": "38652" 00:19:01.670 }, 00:19:01.670 "auth": { 00:19:01.670 "state": "completed", 00:19:01.670 "digest": "sha512", 00:19:01.670 "dhgroup": "null" 00:19:01.670 } 00:19:01.670 } 00:19:01.670 ]' 00:19:01.670 
12:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.670 12:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.670 12:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.670 12:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:01.670 12:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.670 12:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.670 12:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.670 12:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.929 12:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTI1OGFhZGZkNTQyMDVkOGUzMTdhODA4MDBlZDBjNWapZaHf: --dhchap-ctrl-secret DHHC-1:02:YjM5N2FmODViNGZiNWU4MWY3ZGQ3MzcyYTI1NTM3Yjg5YzgxOTZmOGY2NWY0YjRmr5fL9g==: 00:19:02.865 12:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.865 12:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:02.865 12:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.865 12:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.865 12:48:54 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.865 12:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.865 12:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:02.865 12:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:02.865 12:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:02.865 12:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.865 12:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:02.865 12:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:02.865 12:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:02.865 12:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.865 12:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.865 12:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.865 12:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.865 12:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.865 12:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.865 12:48:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.123 00:19:03.123 12:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.123 12:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.123 12:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.690 12:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.690 12:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.690 12:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.690 12:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.690 12:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.690 12:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.690 { 00:19:03.690 "cntlid": 101, 00:19:03.690 "qid": 0, 00:19:03.690 "state": "enabled", 00:19:03.690 "thread": "nvmf_tgt_poll_group_000", 00:19:03.690 "listen_address": { 00:19:03.690 "trtype": "TCP", 00:19:03.690 "adrfam": "IPv4", 00:19:03.690 "traddr": "10.0.0.2", 00:19:03.690 "trsvcid": "4420" 00:19:03.690 }, 00:19:03.690 "peer_address": { 00:19:03.690 "trtype": "TCP", 00:19:03.690 "adrfam": "IPv4", 00:19:03.690 "traddr": "10.0.0.1", 00:19:03.690 "trsvcid": "38676" 00:19:03.690 }, 00:19:03.690 "auth": { 00:19:03.690 "state": "completed", 00:19:03.690 "digest": "sha512", 00:19:03.690 "dhgroup": "null" 
00:19:03.690 } 00:19:03.690 } 00:19:03.690 ]' 00:19:03.690 12:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.690 12:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.690 12:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.690 12:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:03.690 12:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.690 12:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.690 12:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.690 12:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.950 12:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGQyMDZiODE0YWEzNDk4ODI1OGVjZWVmZWRmYTFiZTJmMjI3YjdiYjM0YjRmNjg02f23rQ==: --dhchap-ctrl-secret DHHC-1:01:NGFlMjE4MjAzZGU4N2Y2YTZmODhjNWIxMTJlYmQ5ZTFijzKD: 00:19:04.887 12:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.887 12:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:04.887 12:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.887 12:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:04.887 12:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.887 12:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.887 12:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:04.887 12:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:04.887 12:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:04.887 12:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.887 12:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.887 12:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:04.887 12:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:04.887 12:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.887 12:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:04.887 12:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.887 12:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.887 12:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.887 12:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:04.887 12:48:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:05.145 00:19:05.145 12:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.145 12:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.145 12:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.404 12:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.404 12:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.404 12:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.404 12:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.404 12:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.404 12:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.404 { 00:19:05.404 "cntlid": 103, 00:19:05.404 "qid": 0, 00:19:05.404 "state": "enabled", 00:19:05.404 "thread": "nvmf_tgt_poll_group_000", 00:19:05.404 "listen_address": { 00:19:05.404 "trtype": "TCP", 00:19:05.404 "adrfam": "IPv4", 00:19:05.404 "traddr": "10.0.0.2", 00:19:05.404 "trsvcid": "4420" 00:19:05.404 }, 00:19:05.404 "peer_address": { 00:19:05.404 "trtype": "TCP", 00:19:05.404 "adrfam": "IPv4", 00:19:05.404 "traddr": "10.0.0.1", 00:19:05.404 "trsvcid": "38690" 00:19:05.404 }, 00:19:05.404 "auth": { 00:19:05.404 "state": "completed", 00:19:05.404 "digest": "sha512", 00:19:05.404 "dhgroup": "null" 00:19:05.404 } 00:19:05.404 } 
00:19:05.404 ]' 00:19:05.404 12:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.663 12:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.663 12:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.663 12:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:05.663 12:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.663 12:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.663 12:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.663 12:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.922 12:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWQyMzY3NTE1MWEzNmFkNzNhOWJkOTI4Nzk3YTdiNDhiMTliYWE4OWViYmE3N2JkNGYyMmIzZWJlNzFmMzFjMUedij8=: 00:19:06.861 12:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.861 12:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:06.861 12:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.861 12:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.861 12:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:19:06.861 12:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.861 12:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.861 12:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:06.861 12:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:06.861 12:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:06.861 12:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.861 12:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:06.861 12:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:06.861 12:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:06.861 12:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.861 12:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.861 12:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.861 12:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.861 12:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.861 12:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.861 12:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.120 00:19:07.379 12:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.379 12:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.379 12:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.637 12:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.637 12:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.637 12:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.637 12:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.637 12:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.637 12:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.637 { 00:19:07.638 "cntlid": 105, 00:19:07.638 "qid": 0, 00:19:07.638 "state": "enabled", 00:19:07.638 "thread": "nvmf_tgt_poll_group_000", 00:19:07.638 "listen_address": { 00:19:07.638 "trtype": "TCP", 00:19:07.638 "adrfam": "IPv4", 00:19:07.638 "traddr": "10.0.0.2", 00:19:07.638 "trsvcid": "4420" 00:19:07.638 }, 00:19:07.638 "peer_address": { 00:19:07.638 "trtype": "TCP", 00:19:07.638 "adrfam": "IPv4", 00:19:07.638 "traddr": "10.0.0.1", 00:19:07.638 "trsvcid": "38704" 00:19:07.638 }, 00:19:07.638 "auth": { 00:19:07.638 
"state": "completed", 00:19:07.638 "digest": "sha512", 00:19:07.638 "dhgroup": "ffdhe2048" 00:19:07.638 } 00:19:07.638 } 00:19:07.638 ]' 00:19:07.896 12:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.896 12:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.896 12:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.896 12:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:07.896 12:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.896 12:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.896 12:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.896 12:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.155 12:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRjMTIwZmE4NDg3Njk4MThjNWQxMmQ1NjhiOTU2MjAyODdmYTg3MDg1ZTQ2ZWEycInjqw==: --dhchap-ctrl-secret DHHC-1:03:ZGVlOTJiMTY0MzQzZDIxNDRlODZhNTZkMTY3YTc4MjgyMTNlZWM0MTU4NzkxMzYxMjc4ZjQzZjRkZmVlZDhiOP8EHms=: 00:19:09.092 12:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.092 12:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:09.092 12:49:00 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.092 12:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.092 12:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.092 12:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.092 12:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:09.092 12:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:09.092 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:09.350 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.350 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:09.350 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:09.350 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:09.350 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.350 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.350 12:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.350 12:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.350 12:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.350 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.350 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.608 00:19:09.608 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.608 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.608 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.174 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.174 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.174 12:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.174 12:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.174 12:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.174 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.174 { 00:19:10.174 "cntlid": 107, 00:19:10.174 "qid": 0, 00:19:10.174 "state": "enabled", 00:19:10.174 "thread": "nvmf_tgt_poll_group_000", 00:19:10.174 "listen_address": { 00:19:10.174 "trtype": "TCP", 00:19:10.174 "adrfam": "IPv4", 00:19:10.174 "traddr": "10.0.0.2", 00:19:10.174 "trsvcid": "4420" 00:19:10.174 }, 00:19:10.174 "peer_address": { 00:19:10.174 "trtype": "TCP", 
00:19:10.174 "adrfam": "IPv4", 00:19:10.174 "traddr": "10.0.0.1", 00:19:10.174 "trsvcid": "38726" 00:19:10.174 }, 00:19:10.174 "auth": { 00:19:10.174 "state": "completed", 00:19:10.174 "digest": "sha512", 00:19:10.174 "dhgroup": "ffdhe2048" 00:19:10.174 } 00:19:10.174 } 00:19:10.174 ]' 00:19:10.174 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.174 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.174 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.174 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:10.174 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.174 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.174 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.174 12:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.431 12:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTI1OGFhZGZkNTQyMDVkOGUzMTdhODA4MDBlZDBjNWapZaHf: --dhchap-ctrl-secret DHHC-1:02:YjM5N2FmODViNGZiNWU4MWY3ZGQ3MzcyYTI1NTM3Yjg5YzgxOTZmOGY2NWY0YjRmr5fL9g==: 00:19:11.365 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.365 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:11.365 12:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.365 12:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.365 12:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.365 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.365 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:11.365 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:11.623 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:11.623 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.623 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:11.623 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:11.623 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:11.623 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.623 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.623 12:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.623 12:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.623 12:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:19:11.623 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.623 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.881 00:19:11.881 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.881 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.881 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.140 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.140 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.140 12:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.140 12:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.140 12:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.140 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.140 { 00:19:12.140 "cntlid": 109, 00:19:12.140 "qid": 0, 00:19:12.140 "state": "enabled", 00:19:12.140 "thread": "nvmf_tgt_poll_group_000", 00:19:12.140 "listen_address": { 00:19:12.140 "trtype": "TCP", 00:19:12.140 "adrfam": "IPv4", 00:19:12.140 "traddr": "10.0.0.2", 00:19:12.140 "trsvcid": "4420" 
00:19:12.140 }, 00:19:12.140 "peer_address": { 00:19:12.140 "trtype": "TCP", 00:19:12.140 "adrfam": "IPv4", 00:19:12.140 "traddr": "10.0.0.1", 00:19:12.140 "trsvcid": "42882" 00:19:12.140 }, 00:19:12.140 "auth": { 00:19:12.140 "state": "completed", 00:19:12.140 "digest": "sha512", 00:19:12.140 "dhgroup": "ffdhe2048" 00:19:12.140 } 00:19:12.140 } 00:19:12.140 ]' 00:19:12.140 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.140 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.140 12:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.140 12:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:12.140 12:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.140 12:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.140 12:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.140 12:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.399 12:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGQyMDZiODE0YWEzNDk4ODI1OGVjZWVmZWRmYTFiZTJmMjI3YjdiYjM0YjRmNjg02f23rQ==: --dhchap-ctrl-secret DHHC-1:01:NGFlMjE4MjAzZGU4N2Y2YTZmODhjNWIxMTJlYmQ5ZTFijzKD: 00:19:13.334 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.334 12:49:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:13.334 12:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.334 12:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.334 12:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.334 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.334 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.334 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.593 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:13.593 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.593 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:13.593 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:13.593 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:13.593 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.593 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:13.593 12:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.593 12:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.593 12:49:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.593 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:13.593 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:13.852 00:19:13.852 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.852 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.852 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.112 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.112 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.112 12:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.112 12:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.112 12:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.112 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.112 { 00:19:14.112 "cntlid": 111, 00:19:14.112 "qid": 0, 00:19:14.112 "state": "enabled", 00:19:14.112 "thread": "nvmf_tgt_poll_group_000", 00:19:14.112 "listen_address": { 00:19:14.112 "trtype": "TCP", 00:19:14.112 "adrfam": "IPv4", 00:19:14.112 "traddr": "10.0.0.2", 
00:19:14.112 "trsvcid": "4420" 00:19:14.112 }, 00:19:14.112 "peer_address": { 00:19:14.112 "trtype": "TCP", 00:19:14.112 "adrfam": "IPv4", 00:19:14.112 "traddr": "10.0.0.1", 00:19:14.112 "trsvcid": "42902" 00:19:14.112 }, 00:19:14.112 "auth": { 00:19:14.112 "state": "completed", 00:19:14.112 "digest": "sha512", 00:19:14.112 "dhgroup": "ffdhe2048" 00:19:14.112 } 00:19:14.112 } 00:19:14.112 ]' 00:19:14.112 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.112 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.112 12:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.112 12:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:14.112 12:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.371 12:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.371 12:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.371 12:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.630 12:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWQyMzY3NTE1MWEzNmFkNzNhOWJkOTI4Nzk3YTdiNDhiMTliYWE4OWViYmE3N2JkNGYyMmIzZWJlNzFmMzFjMUedij8=: 00:19:15.567 12:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.567 12:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:15.567 12:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.567 12:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.567 12:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.567 12:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.567 12:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.567 12:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:15.567 12:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:16.134 12:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:16.134 12:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.134 12:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:16.134 12:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:16.134 12:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:16.134 12:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.134 12:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.134 12:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.134 12:49:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.134 12:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.134 12:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.134 12:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.392 00:19:16.392 12:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.392 12:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.392 12:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.651 12:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.651 12:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.651 12:49:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.651 12:49:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.651 12:49:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.651 12:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.651 { 00:19:16.651 "cntlid": 113, 00:19:16.651 "qid": 0, 00:19:16.651 "state": "enabled", 00:19:16.651 "thread": 
"nvmf_tgt_poll_group_000", 00:19:16.651 "listen_address": { 00:19:16.651 "trtype": "TCP", 00:19:16.651 "adrfam": "IPv4", 00:19:16.651 "traddr": "10.0.0.2", 00:19:16.651 "trsvcid": "4420" 00:19:16.651 }, 00:19:16.651 "peer_address": { 00:19:16.651 "trtype": "TCP", 00:19:16.651 "adrfam": "IPv4", 00:19:16.651 "traddr": "10.0.0.1", 00:19:16.651 "trsvcid": "42936" 00:19:16.651 }, 00:19:16.651 "auth": { 00:19:16.651 "state": "completed", 00:19:16.651 "digest": "sha512", 00:19:16.651 "dhgroup": "ffdhe3072" 00:19:16.651 } 00:19:16.651 } 00:19:16.651 ]' 00:19:16.651 12:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.651 12:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.651 12:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.651 12:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:16.651 12:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.909 12:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.909 12:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.909 12:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.168 12:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRjMTIwZmE4NDg3Njk4MThjNWQxMmQ1NjhiOTU2MjAyODdmYTg3MDg1ZTQ2ZWEycInjqw==: --dhchap-ctrl-secret DHHC-1:03:ZGVlOTJiMTY0MzQzZDIxNDRlODZhNTZkMTY3YTc4MjgyMTNlZWM0MTU4NzkxMzYxMjc4ZjQzZjRkZmVlZDhiOP8EHms=: 00:19:17.735 12:49:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.993 12:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:17.993 12:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.993 12:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.993 12:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.993 12:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.993 12:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:17.993 12:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.251 12:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:18.251 12:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.251 12:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:18.251 12:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:18.251 12:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:18.251 12:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.251 12:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:18.251 12:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.251 12:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.251 12:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.252 12:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.252 12:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.509 00:19:18.509 12:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.509 12:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.509 12:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.075 12:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.075 12:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.075 12:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.075 12:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.075 12:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.075 12:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:19:19.075 { 00:19:19.075 "cntlid": 115, 00:19:19.075 "qid": 0, 00:19:19.075 "state": "enabled", 00:19:19.075 "thread": "nvmf_tgt_poll_group_000", 00:19:19.075 "listen_address": { 00:19:19.075 "trtype": "TCP", 00:19:19.075 "adrfam": "IPv4", 00:19:19.075 "traddr": "10.0.0.2", 00:19:19.075 "trsvcid": "4420" 00:19:19.075 }, 00:19:19.076 "peer_address": { 00:19:19.076 "trtype": "TCP", 00:19:19.076 "adrfam": "IPv4", 00:19:19.076 "traddr": "10.0.0.1", 00:19:19.076 "trsvcid": "42970" 00:19:19.076 }, 00:19:19.076 "auth": { 00:19:19.076 "state": "completed", 00:19:19.076 "digest": "sha512", 00:19:19.076 "dhgroup": "ffdhe3072" 00:19:19.076 } 00:19:19.076 } 00:19:19.076 ]' 00:19:19.076 12:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.076 12:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.076 12:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.076 12:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:19.076 12:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.076 12:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.076 12:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.076 12:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.333 12:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTI1OGFhZGZkNTQyMDVkOGUzMTdhODA4MDBlZDBjNWapZaHf: --dhchap-ctrl-secret 
DHHC-1:02:YjM5N2FmODViNGZiNWU4MWY3ZGQ3MzcyYTI1NTM3Yjg5YzgxOTZmOGY2NWY0YjRmr5fL9g==: 00:19:20.267 12:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.525 12:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:20.525 12:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.525 12:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.525 12:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.525 12:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.525 12:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:20.525 12:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:20.784 12:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:20.784 12:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.784 12:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:20.784 12:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:20.784 12:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:20.784 12:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.784 12:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.784 12:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.784 12:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.784 12:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.784 12:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.784 12:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.042 00:19:21.042 12:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.042 12:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.042 12:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.300 12:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.300 12:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.300 12:49:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.300 12:49:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.300 12:49:13 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.300 12:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.300 { 00:19:21.300 "cntlid": 117, 00:19:21.300 "qid": 0, 00:19:21.300 "state": "enabled", 00:19:21.300 "thread": "nvmf_tgt_poll_group_000", 00:19:21.300 "listen_address": { 00:19:21.300 "trtype": "TCP", 00:19:21.300 "adrfam": "IPv4", 00:19:21.301 "traddr": "10.0.0.2", 00:19:21.301 "trsvcid": "4420" 00:19:21.301 }, 00:19:21.301 "peer_address": { 00:19:21.301 "trtype": "TCP", 00:19:21.301 "adrfam": "IPv4", 00:19:21.301 "traddr": "10.0.0.1", 00:19:21.301 "trsvcid": "48074" 00:19:21.301 }, 00:19:21.301 "auth": { 00:19:21.301 "state": "completed", 00:19:21.301 "digest": "sha512", 00:19:21.301 "dhgroup": "ffdhe3072" 00:19:21.301 } 00:19:21.301 } 00:19:21.301 ]' 00:19:21.301 12:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.301 12:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.301 12:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.301 12:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:21.301 12:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.559 12:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.559 12:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.559 12:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.822 12:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 
--dhchap-secret DHHC-1:02:MGQyMDZiODE0YWEzNDk4ODI1OGVjZWVmZWRmYTFiZTJmMjI3YjdiYjM0YjRmNjg02f23rQ==: --dhchap-ctrl-secret DHHC-1:01:NGFlMjE4MjAzZGU4N2Y2YTZmODhjNWIxMTJlYmQ5ZTFijzKD: 00:19:22.390 12:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.390 12:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:22.390 12:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.390 12:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.390 12:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.390 12:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.390 12:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:22.390 12:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:22.646 12:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:22.646 12:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.646 12:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:22.646 12:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:22.646 12:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:22.646 12:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.646 12:49:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:22.646 12:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.646 12:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.646 12:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.646 12:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.646 12:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.212 00:19:23.212 12:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.212 12:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.212 12:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.470 12:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.470 12:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.470 12:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.470 12:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.728 12:49:15 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.728 12:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.728 { 00:19:23.728 "cntlid": 119, 00:19:23.728 "qid": 0, 00:19:23.728 "state": "enabled", 00:19:23.728 "thread": "nvmf_tgt_poll_group_000", 00:19:23.728 "listen_address": { 00:19:23.728 "trtype": "TCP", 00:19:23.728 "adrfam": "IPv4", 00:19:23.728 "traddr": "10.0.0.2", 00:19:23.728 "trsvcid": "4420" 00:19:23.728 }, 00:19:23.728 "peer_address": { 00:19:23.728 "trtype": "TCP", 00:19:23.728 "adrfam": "IPv4", 00:19:23.728 "traddr": "10.0.0.1", 00:19:23.728 "trsvcid": "48104" 00:19:23.728 }, 00:19:23.729 "auth": { 00:19:23.729 "state": "completed", 00:19:23.729 "digest": "sha512", 00:19:23.729 "dhgroup": "ffdhe3072" 00:19:23.729 } 00:19:23.729 } 00:19:23.729 ]' 00:19:23.729 12:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.729 12:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.729 12:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.729 12:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:23.729 12:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.729 12:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.729 12:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.729 12:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.987 12:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 
--dhchap-secret DHHC-1:03:OWQyMzY3NTE1MWEzNmFkNzNhOWJkOTI4Nzk3YTdiNDhiMTliYWE4OWViYmE3N2JkNGYyMmIzZWJlNzFmMzFjMUedij8=: 00:19:24.921 12:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.921 12:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:24.921 12:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.921 12:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.921 12:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.921 12:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.921 12:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.921 12:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:24.921 12:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.488 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:25.488 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.488 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.488 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:25.488 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:25.488 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.488 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.488 12:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.488 12:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.488 12:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.488 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.488 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.488 00:19:25.746 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.746 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.746 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.005 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.005 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.005 12:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:19:26.005 12:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.005 12:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.005 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.005 { 00:19:26.005 "cntlid": 121, 00:19:26.005 "qid": 0, 00:19:26.005 "state": "enabled", 00:19:26.005 "thread": "nvmf_tgt_poll_group_000", 00:19:26.005 "listen_address": { 00:19:26.005 "trtype": "TCP", 00:19:26.005 "adrfam": "IPv4", 00:19:26.005 "traddr": "10.0.0.2", 00:19:26.005 "trsvcid": "4420" 00:19:26.005 }, 00:19:26.005 "peer_address": { 00:19:26.005 "trtype": "TCP", 00:19:26.006 "adrfam": "IPv4", 00:19:26.006 "traddr": "10.0.0.1", 00:19:26.006 "trsvcid": "48140" 00:19:26.006 }, 00:19:26.006 "auth": { 00:19:26.006 "state": "completed", 00:19:26.006 "digest": "sha512", 00:19:26.006 "dhgroup": "ffdhe4096" 00:19:26.006 } 00:19:26.006 } 00:19:26.006 ]' 00:19:26.006 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.006 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.006 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.006 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:26.006 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.006 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.006 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.006 12:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.570 12:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRjMTIwZmE4NDg3Njk4MThjNWQxMmQ1NjhiOTU2MjAyODdmYTg3MDg1ZTQ2ZWEycInjqw==: --dhchap-ctrl-secret DHHC-1:03:ZGVlOTJiMTY0MzQzZDIxNDRlODZhNTZkMTY3YTc4MjgyMTNlZWM0MTU4NzkxMzYxMjc4ZjQzZjRkZmVlZDhiOP8EHms=: 00:19:27.504 12:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.504 12:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:27.504 12:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.504 12:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.504 12:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.504 12:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.504 12:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:27.504 12:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:27.761 12:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:27.761 12:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.761 12:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:27.761 12:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:27.761 12:49:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:27.762 12:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.762 12:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.762 12:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.762 12:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.762 12:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.762 12:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.762 12:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.019 00:19:28.019 12:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.019 12:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.019 12:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.278 12:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.278 12:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.278 12:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.278 12:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.278 12:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.278 12:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.278 { 00:19:28.278 "cntlid": 123, 00:19:28.278 "qid": 0, 00:19:28.278 "state": "enabled", 00:19:28.278 "thread": "nvmf_tgt_poll_group_000", 00:19:28.278 "listen_address": { 00:19:28.278 "trtype": "TCP", 00:19:28.278 "adrfam": "IPv4", 00:19:28.278 "traddr": "10.0.0.2", 00:19:28.278 "trsvcid": "4420" 00:19:28.278 }, 00:19:28.278 "peer_address": { 00:19:28.278 "trtype": "TCP", 00:19:28.278 "adrfam": "IPv4", 00:19:28.278 "traddr": "10.0.0.1", 00:19:28.278 "trsvcid": "48170" 00:19:28.278 }, 00:19:28.278 "auth": { 00:19:28.278 "state": "completed", 00:19:28.278 "digest": "sha512", 00:19:28.278 "dhgroup": "ffdhe4096" 00:19:28.278 } 00:19:28.278 } 00:19:28.278 ]' 00:19:28.278 12:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.536 12:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.536 12:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.536 12:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:28.536 12:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.536 12:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.536 12:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.536 12:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.147 12:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTI1OGFhZGZkNTQyMDVkOGUzMTdhODA4MDBlZDBjNWapZaHf: --dhchap-ctrl-secret DHHC-1:02:YjM5N2FmODViNGZiNWU4MWY3ZGQ3MzcyYTI1NTM3Yjg5YzgxOTZmOGY2NWY0YjRmr5fL9g==: 00:19:30.102 12:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.102 12:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:30.102 12:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.102 12:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.102 12:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.102 12:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.102 12:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:30.102 12:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:30.362 12:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:30.362 12:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.362 12:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 
00:19:30.362 12:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:30.362 12:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:30.362 12:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.362 12:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.362 12:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.362 12:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.362 12:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.362 12:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.362 12:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.929 00:19:30.929 12:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.929 12:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.929 12:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.223 12:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:19:31.223 12:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.223 12:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.223 12:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.223 12:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.223 12:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.223 { 00:19:31.223 "cntlid": 125, 00:19:31.223 "qid": 0, 00:19:31.223 "state": "enabled", 00:19:31.223 "thread": "nvmf_tgt_poll_group_000", 00:19:31.223 "listen_address": { 00:19:31.223 "trtype": "TCP", 00:19:31.223 "adrfam": "IPv4", 00:19:31.223 "traddr": "10.0.0.2", 00:19:31.223 "trsvcid": "4420" 00:19:31.223 }, 00:19:31.223 "peer_address": { 00:19:31.223 "trtype": "TCP", 00:19:31.223 "adrfam": "IPv4", 00:19:31.223 "traddr": "10.0.0.1", 00:19:31.223 "trsvcid": "49334" 00:19:31.223 }, 00:19:31.223 "auth": { 00:19:31.223 "state": "completed", 00:19:31.223 "digest": "sha512", 00:19:31.223 "dhgroup": "ffdhe4096" 00:19:31.223 } 00:19:31.223 } 00:19:31.223 ]' 00:19:31.223 12:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.223 12:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.223 12:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.223 12:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:31.223 12:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.223 12:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.223 12:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.223 12:49:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.795 12:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGQyMDZiODE0YWEzNDk4ODI1OGVjZWVmZWRmYTFiZTJmMjI3YjdiYjM0YjRmNjg02f23rQ==: --dhchap-ctrl-secret DHHC-1:01:NGFlMjE4MjAzZGU4N2Y2YTZmODhjNWIxMTJlYmQ5ZTFijzKD: 00:19:32.730 12:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.730 12:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:32.730 12:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.730 12:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.730 12:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.730 12:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.730 12:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:32.730 12:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:32.988 12:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:32.988 12:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:19:32.988 12:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:32.988 12:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:32.988 12:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:32.988 12:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.988 12:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:32.988 12:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.988 12:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.988 12:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.988 12:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:32.988 12:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.554 00:19:33.554 12:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.554 12:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.554 12:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.812 12:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:19:33.812 12:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.812 12:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.812 12:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.812 12:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.812 12:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.812 { 00:19:33.812 "cntlid": 127, 00:19:33.812 "qid": 0, 00:19:33.812 "state": "enabled", 00:19:33.812 "thread": "nvmf_tgt_poll_group_000", 00:19:33.812 "listen_address": { 00:19:33.812 "trtype": "TCP", 00:19:33.812 "adrfam": "IPv4", 00:19:33.812 "traddr": "10.0.0.2", 00:19:33.812 "trsvcid": "4420" 00:19:33.812 }, 00:19:33.812 "peer_address": { 00:19:33.812 "trtype": "TCP", 00:19:33.812 "adrfam": "IPv4", 00:19:33.812 "traddr": "10.0.0.1", 00:19:33.812 "trsvcid": "49358" 00:19:33.812 }, 00:19:33.812 "auth": { 00:19:33.812 "state": "completed", 00:19:33.812 "digest": "sha512", 00:19:33.812 "dhgroup": "ffdhe4096" 00:19:33.812 } 00:19:33.812 } 00:19:33.812 ]' 00:19:33.812 12:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.812 12:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.812 12:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.812 12:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:33.812 12:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.812 12:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.812 12:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.812 12:49:25 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.378 12:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWQyMzY3NTE1MWEzNmFkNzNhOWJkOTI4Nzk3YTdiNDhiMTliYWE4OWViYmE3N2JkNGYyMmIzZWJlNzFmMzFjMUedij8=: 00:19:35.324 12:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.324 12:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:35.324 12:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.324 12:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.324 12:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.324 12:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.324 12:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.324 12:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.324 12:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.890 12:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:19:35.890 12:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:19:35.890 12:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:35.890 12:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:35.890 12:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:35.890 12:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.890 12:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.890 12:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.890 12:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.890 12:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.890 12:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.890 12:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.827 00:19:36.827 12:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.827 12:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.827 12:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.085 12:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.085 12:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.085 12:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.085 12:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.085 12:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.085 12:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.085 { 00:19:37.085 "cntlid": 129, 00:19:37.085 "qid": 0, 00:19:37.085 "state": "enabled", 00:19:37.085 "thread": "nvmf_tgt_poll_group_000", 00:19:37.085 "listen_address": { 00:19:37.085 "trtype": "TCP", 00:19:37.085 "adrfam": "IPv4", 00:19:37.085 "traddr": "10.0.0.2", 00:19:37.085 "trsvcid": "4420" 00:19:37.085 }, 00:19:37.085 "peer_address": { 00:19:37.085 "trtype": "TCP", 00:19:37.085 "adrfam": "IPv4", 00:19:37.085 "traddr": "10.0.0.1", 00:19:37.085 "trsvcid": "49394" 00:19:37.085 }, 00:19:37.085 "auth": { 00:19:37.085 "state": "completed", 00:19:37.085 "digest": "sha512", 00:19:37.085 "dhgroup": "ffdhe6144" 00:19:37.085 } 00:19:37.085 } 00:19:37.085 ]' 00:19:37.085 12:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.085 12:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.085 12:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.085 12:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:37.085 12:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.085 12:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.085 12:49:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.085 12:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.343 12:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRjMTIwZmE4NDg3Njk4MThjNWQxMmQ1NjhiOTU2MjAyODdmYTg3MDg1ZTQ2ZWEycInjqw==: --dhchap-ctrl-secret DHHC-1:03:ZGVlOTJiMTY0MzQzZDIxNDRlODZhNTZkMTY3YTc4MjgyMTNlZWM0MTU4NzkxMzYxMjc4ZjQzZjRkZmVlZDhiOP8EHms=: 00:19:38.279 12:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.279 12:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:38.279 12:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.279 12:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.279 12:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.279 12:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.279 12:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:38.279 12:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:38.537 12:49:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:38.537 12:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.537 12:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:38.537 12:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:38.537 12:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:38.537 12:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.537 12:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.537 12:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.537 12:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.537 12:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.537 12:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.537 12:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.105 00:19:39.105 12:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.105 12:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:19:39.105 12:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.105 12:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.105 12:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.105 12:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.105 12:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.363 12:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.363 12:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.363 { 00:19:39.363 "cntlid": 131, 00:19:39.363 "qid": 0, 00:19:39.363 "state": "enabled", 00:19:39.363 "thread": "nvmf_tgt_poll_group_000", 00:19:39.363 "listen_address": { 00:19:39.363 "trtype": "TCP", 00:19:39.363 "adrfam": "IPv4", 00:19:39.363 "traddr": "10.0.0.2", 00:19:39.363 "trsvcid": "4420" 00:19:39.363 }, 00:19:39.363 "peer_address": { 00:19:39.363 "trtype": "TCP", 00:19:39.363 "adrfam": "IPv4", 00:19:39.363 "traddr": "10.0.0.1", 00:19:39.363 "trsvcid": "49438" 00:19:39.363 }, 00:19:39.363 "auth": { 00:19:39.363 "state": "completed", 00:19:39.363 "digest": "sha512", 00:19:39.363 "dhgroup": "ffdhe6144" 00:19:39.363 } 00:19:39.363 } 00:19:39.363 ]' 00:19:39.363 12:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.363 12:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.363 12:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.363 12:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:39.363 12:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:19:39.363 12:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.363 12:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.363 12:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.621 12:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTI1OGFhZGZkNTQyMDVkOGUzMTdhODA4MDBlZDBjNWapZaHf: --dhchap-ctrl-secret DHHC-1:02:YjM5N2FmODViNGZiNWU4MWY3ZGQ3MzcyYTI1NTM3Yjg5YzgxOTZmOGY2NWY0YjRmr5fL9g==: 00:19:40.556 12:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.556 12:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:40.556 12:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.556 12:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.556 12:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.556 12:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.556 12:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:40.556 12:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:40.815 12:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:40.815 12:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.815 12:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:40.815 12:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:40.815 12:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:40.815 12:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.815 12:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.815 12:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.815 12:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.815 12:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.815 12:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.815 12:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.074 00:19:41.074 12:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:19:41.074 12:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.074 12:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.642 12:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.642 12:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.642 12:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.642 12:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.642 12:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.642 12:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.642 { 00:19:41.642 "cntlid": 133, 00:19:41.642 "qid": 0, 00:19:41.642 "state": "enabled", 00:19:41.642 "thread": "nvmf_tgt_poll_group_000", 00:19:41.642 "listen_address": { 00:19:41.642 "trtype": "TCP", 00:19:41.642 "adrfam": "IPv4", 00:19:41.642 "traddr": "10.0.0.2", 00:19:41.642 "trsvcid": "4420" 00:19:41.642 }, 00:19:41.642 "peer_address": { 00:19:41.642 "trtype": "TCP", 00:19:41.642 "adrfam": "IPv4", 00:19:41.642 "traddr": "10.0.0.1", 00:19:41.642 "trsvcid": "50796" 00:19:41.642 }, 00:19:41.642 "auth": { 00:19:41.642 "state": "completed", 00:19:41.642 "digest": "sha512", 00:19:41.642 "dhgroup": "ffdhe6144" 00:19:41.642 } 00:19:41.642 } 00:19:41.642 ]' 00:19:41.642 12:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.642 12:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.642 12:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.642 12:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:41.642 12:49:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.642 12:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.642 12:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.642 12:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.900 12:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGQyMDZiODE0YWEzNDk4ODI1OGVjZWVmZWRmYTFiZTJmMjI3YjdiYjM0YjRmNjg02f23rQ==: --dhchap-ctrl-secret DHHC-1:01:NGFlMjE4MjAzZGU4N2Y2YTZmODhjNWIxMTJlYmQ5ZTFijzKD: 00:19:42.834 12:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.834 12:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:42.834 12:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.834 12:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.834 12:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.834 12:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.834 12:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:42.834 12:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:42.834 12:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:42.834 12:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.834 12:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:42.834 12:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:42.834 12:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:42.834 12:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.834 12:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:42.834 12:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.834 12:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.834 12:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.835 12:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.835 12:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.400 00:19:43.400 12:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:19:43.400 12:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.400 12:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.659 12:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.659 12:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.659 12:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.659 12:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.659 12:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.659 12:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.659 { 00:19:43.659 "cntlid": 135, 00:19:43.659 "qid": 0, 00:19:43.659 "state": "enabled", 00:19:43.659 "thread": "nvmf_tgt_poll_group_000", 00:19:43.659 "listen_address": { 00:19:43.659 "trtype": "TCP", 00:19:43.659 "adrfam": "IPv4", 00:19:43.659 "traddr": "10.0.0.2", 00:19:43.659 "trsvcid": "4420" 00:19:43.659 }, 00:19:43.659 "peer_address": { 00:19:43.659 "trtype": "TCP", 00:19:43.659 "adrfam": "IPv4", 00:19:43.659 "traddr": "10.0.0.1", 00:19:43.659 "trsvcid": "50818" 00:19:43.659 }, 00:19:43.659 "auth": { 00:19:43.659 "state": "completed", 00:19:43.659 "digest": "sha512", 00:19:43.659 "dhgroup": "ffdhe6144" 00:19:43.659 } 00:19:43.659 } 00:19:43.659 ]' 00:19:43.659 12:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.659 12:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.659 12:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.917 12:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:19:43.917 12:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.917 12:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.917 12:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.917 12:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.176 12:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWQyMzY3NTE1MWEzNmFkNzNhOWJkOTI4Nzk3YTdiNDhiMTliYWE4OWViYmE3N2JkNGYyMmIzZWJlNzFmMzFjMUedij8=: 00:19:45.122 12:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.122 12:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:45.122 12:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.122 12:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.122 12:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.122 12:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:45.122 12:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.122 12:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:45.122 12:49:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:45.122 12:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:45.122 12:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.122 12:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:45.122 12:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:45.122 12:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:45.122 12:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.122 12:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.122 12:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.122 12:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.122 12:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.122 12:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.122 12:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.056 00:19:46.056 12:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.056 12:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.056 12:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.056 12:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.315 12:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.315 12:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.315 12:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.315 12:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.315 12:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.315 { 00:19:46.315 "cntlid": 137, 00:19:46.315 "qid": 0, 00:19:46.315 "state": "enabled", 00:19:46.315 "thread": "nvmf_tgt_poll_group_000", 00:19:46.315 "listen_address": { 00:19:46.315 "trtype": "TCP", 00:19:46.315 "adrfam": "IPv4", 00:19:46.315 "traddr": "10.0.0.2", 00:19:46.315 "trsvcid": "4420" 00:19:46.315 }, 00:19:46.315 "peer_address": { 00:19:46.315 "trtype": "TCP", 00:19:46.315 "adrfam": "IPv4", 00:19:46.315 "traddr": "10.0.0.1", 00:19:46.315 "trsvcid": "50858" 00:19:46.315 }, 00:19:46.315 "auth": { 00:19:46.315 "state": "completed", 00:19:46.315 "digest": "sha512", 00:19:46.315 "dhgroup": "ffdhe8192" 00:19:46.315 } 00:19:46.315 } 00:19:46.315 ]' 00:19:46.315 12:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.315 12:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.315 12:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- 
# jq -r '.[0].auth.dhgroup' 00:19:46.315 12:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:46.315 12:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.315 12:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.315 12:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.315 12:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.574 12:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRjMTIwZmE4NDg3Njk4MThjNWQxMmQ1NjhiOTU2MjAyODdmYTg3MDg1ZTQ2ZWEycInjqw==: --dhchap-ctrl-secret DHHC-1:03:ZGVlOTJiMTY0MzQzZDIxNDRlODZhNTZkMTY3YTc4MjgyMTNlZWM0MTU4NzkxMzYxMjc4ZjQzZjRkZmVlZDhiOP8EHms=: 00:19:47.509 12:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.509 12:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:47.509 12:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.509 12:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.509 12:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.509 12:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.509 12:49:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:47.509 12:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:47.768 12:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:47.768 12:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.768 12:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:47.768 12:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:47.768 12:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:47.768 12:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.768 12:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.768 12:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.768 12:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.768 12:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.768 12:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.768 12:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.338 00:19:48.338 12:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.338 12:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.338 12:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.597 12:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.597 12:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.597 12:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.597 12:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.597 12:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.597 12:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.597 { 00:19:48.597 "cntlid": 139, 00:19:48.597 "qid": 0, 00:19:48.597 "state": "enabled", 00:19:48.597 "thread": "nvmf_tgt_poll_group_000", 00:19:48.597 "listen_address": { 00:19:48.597 "trtype": "TCP", 00:19:48.597 "adrfam": "IPv4", 00:19:48.597 "traddr": "10.0.0.2", 00:19:48.597 "trsvcid": "4420" 00:19:48.597 }, 00:19:48.597 "peer_address": { 00:19:48.597 "trtype": "TCP", 00:19:48.597 "adrfam": "IPv4", 00:19:48.597 "traddr": "10.0.0.1", 00:19:48.597 "trsvcid": "50894" 00:19:48.597 }, 00:19:48.597 "auth": { 00:19:48.597 "state": "completed", 00:19:48.597 "digest": "sha512", 00:19:48.597 "dhgroup": "ffdhe8192" 00:19:48.597 } 00:19:48.597 } 00:19:48.597 ]' 00:19:48.597 12:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.856 12:49:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:48.856 12:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.856 12:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:48.856 12:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.856 12:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.856 12:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.856 12:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.115 12:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTI1OGFhZGZkNTQyMDVkOGUzMTdhODA4MDBlZDBjNWapZaHf: --dhchap-ctrl-secret DHHC-1:02:YjM5N2FmODViNGZiNWU4MWY3ZGQ3MzcyYTI1NTM3Yjg5YzgxOTZmOGY2NWY0YjRmr5fL9g==: 00:19:50.052 12:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.052 12:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:50.052 12:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.052 12:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.052 12:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.052 12:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid 
in "${!keys[@]}" 00:19:50.052 12:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:50.052 12:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:50.312 12:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:50.312 12:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.312 12:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:50.312 12:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:50.312 12:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:50.312 12:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.312 12:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.312 12:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.312 12:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.312 12:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.312 12:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.312 12:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.881 00:19:50.881 12:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.881 12:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.881 12:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.141 12:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.141 12:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.141 12:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.141 12:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.141 12:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.141 12:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.141 { 00:19:51.141 "cntlid": 141, 00:19:51.141 "qid": 0, 00:19:51.141 "state": "enabled", 00:19:51.141 "thread": "nvmf_tgt_poll_group_000", 00:19:51.141 "listen_address": { 00:19:51.141 "trtype": "TCP", 00:19:51.141 "adrfam": "IPv4", 00:19:51.141 "traddr": "10.0.0.2", 00:19:51.141 "trsvcid": "4420" 00:19:51.141 }, 00:19:51.141 "peer_address": { 00:19:51.141 "trtype": "TCP", 00:19:51.141 "adrfam": "IPv4", 00:19:51.141 "traddr": "10.0.0.1", 00:19:51.141 "trsvcid": "60466" 00:19:51.141 }, 00:19:51.141 "auth": { 00:19:51.141 "state": "completed", 00:19:51.141 "digest": "sha512", 00:19:51.141 "dhgroup": "ffdhe8192" 00:19:51.141 } 00:19:51.141 } 00:19:51.141 ]' 00:19:51.141 12:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:19:51.141 12:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.141 12:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.400 12:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:51.400 12:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.400 12:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.400 12:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.400 12:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.658 12:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGQyMDZiODE0YWEzNDk4ODI1OGVjZWVmZWRmYTFiZTJmMjI3YjdiYjM0YjRmNjg02f23rQ==: --dhchap-ctrl-secret DHHC-1:01:NGFlMjE4MjAzZGU4N2Y2YTZmODhjNWIxMTJlYmQ5ZTFijzKD: 00:19:53.035 12:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.035 12:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:53.035 12:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.035 12:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.035 12:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.035 
12:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.035 12:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:53.035 12:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:53.035 12:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:19:53.035 12:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.035 12:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:53.035 12:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:53.035 12:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:53.035 12:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.035 12:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:53.035 12:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.035 12:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.035 12:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.035 12:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.035 12:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.971 00:19:53.971 12:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.971 12:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.971 12:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.971 12:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.971 12:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.971 12:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.971 12:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.971 12:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.971 12:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.971 { 00:19:53.971 "cntlid": 143, 00:19:53.971 "qid": 0, 00:19:53.971 "state": "enabled", 00:19:53.971 "thread": "nvmf_tgt_poll_group_000", 00:19:53.971 "listen_address": { 00:19:53.971 "trtype": "TCP", 00:19:53.971 "adrfam": "IPv4", 00:19:53.971 "traddr": "10.0.0.2", 00:19:53.971 "trsvcid": "4420" 00:19:53.971 }, 00:19:53.971 "peer_address": { 00:19:53.971 "trtype": "TCP", 00:19:53.971 "adrfam": "IPv4", 00:19:53.971 "traddr": "10.0.0.1", 00:19:53.971 "trsvcid": "60480" 00:19:53.971 }, 00:19:53.971 "auth": { 00:19:53.971 "state": "completed", 00:19:53.971 "digest": "sha512", 00:19:53.971 "dhgroup": "ffdhe8192" 00:19:53.971 } 00:19:53.971 } 00:19:53.971 ]' 00:19:53.971 12:49:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.971 12:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:53.971 12:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.230 12:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:54.230 12:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.230 12:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.230 12:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.230 12:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.488 12:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWQyMzY3NTE1MWEzNmFkNzNhOWJkOTI4Nzk3YTdiNDhiMTliYWE4OWViYmE3N2JkNGYyMmIzZWJlNzFmMzFjMUedij8=: 00:19:55.424 12:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.424 12:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:55.424 12:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.424 12:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.424 12:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.424 
12:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:55.424 12:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:55.424 12:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:55.424 12:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:55.424 12:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:55.424 12:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:55.991 12:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:55.991 12:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.991 12:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:55.991 12:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:55.991 12:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:55.991 12:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.991 12:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.991 12:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.991 12:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.991 12:49:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.991 12:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.991 12:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.928 00:19:56.928 12:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.928 12:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.928 12:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.187 12:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.187 12:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.187 12:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.187 12:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.187 12:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.187 12:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.187 { 00:19:57.187 "cntlid": 145, 00:19:57.187 "qid": 0, 00:19:57.187 "state": "enabled", 00:19:57.187 "thread": "nvmf_tgt_poll_group_000", 00:19:57.187 "listen_address": { 00:19:57.187 "trtype": "TCP", 00:19:57.187 "adrfam": 
"IPv4", 00:19:57.187 "traddr": "10.0.0.2", 00:19:57.187 "trsvcid": "4420" 00:19:57.187 }, 00:19:57.187 "peer_address": { 00:19:57.187 "trtype": "TCP", 00:19:57.187 "adrfam": "IPv4", 00:19:57.187 "traddr": "10.0.0.1", 00:19:57.187 "trsvcid": "60496" 00:19:57.187 }, 00:19:57.187 "auth": { 00:19:57.187 "state": "completed", 00:19:57.187 "digest": "sha512", 00:19:57.187 "dhgroup": "ffdhe8192" 00:19:57.187 } 00:19:57.187 } 00:19:57.187 ]' 00:19:57.187 12:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.187 12:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:57.187 12:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.446 12:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:57.446 12:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.446 12:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.446 12:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.446 12:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.704 12:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRjMTIwZmE4NDg3Njk4MThjNWQxMmQ1NjhiOTU2MjAyODdmYTg3MDg1ZTQ2ZWEycInjqw==: --dhchap-ctrl-secret DHHC-1:03:ZGVlOTJiMTY0MzQzZDIxNDRlODZhNTZkMTY3YTc4MjgyMTNlZWM0MTU4NzkxMzYxMjc4ZjQzZjRkZmVlZDhiOP8EHms=: 00:19:58.272 12:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.272 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.272 12:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:58.272 12:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.272 12:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.272 12:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.272 12:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:19:58.272 12:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.272 12:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.272 12:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.272 12:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:58.272 12:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:58.272 12:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:58.272 12:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:58.272 12:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.272 12:49:50 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:58.272 12:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.272 12:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:58.272 12:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:59.209 request: 00:19:59.209 { 00:19:59.209 "name": "nvme0", 00:19:59.209 "trtype": "tcp", 00:19:59.209 "traddr": "10.0.0.2", 00:19:59.209 "adrfam": "ipv4", 00:19:59.209 "trsvcid": "4420", 00:19:59.209 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:59.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:59.209 "prchk_reftag": false, 00:19:59.209 "prchk_guard": false, 00:19:59.209 "hdgst": false, 00:19:59.209 "ddgst": false, 00:19:59.209 "dhchap_key": "key2", 00:19:59.209 "method": "bdev_nvme_attach_controller", 00:19:59.209 "req_id": 1 00:19:59.209 } 00:19:59.209 Got JSON-RPC error response 00:19:59.209 response: 00:19:59.209 { 00:19:59.209 "code": -5, 00:19:59.209 "message": "Input/output error" 00:19:59.209 } 00:19:59.209 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:59.209 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:59.209 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:59.209 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:19:59.209 12:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:59.209 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.209 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.209 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.209 12:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.209 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.209 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.209 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.209 12:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:59.209 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:59.209 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:59.209 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:59.209 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.209 
12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:59.209 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.209 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:59.209 12:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:00.207 request: 00:20:00.207 { 00:20:00.207 "name": "nvme0", 00:20:00.207 "trtype": "tcp", 00:20:00.207 "traddr": "10.0.0.2", 00:20:00.207 "adrfam": "ipv4", 00:20:00.207 "trsvcid": "4420", 00:20:00.207 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:00.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:00.207 "prchk_reftag": false, 00:20:00.207 "prchk_guard": false, 00:20:00.207 "hdgst": false, 00:20:00.207 "ddgst": false, 00:20:00.207 "dhchap_key": "key1", 00:20:00.207 "dhchap_ctrlr_key": "ckey2", 00:20:00.207 "method": "bdev_nvme_attach_controller", 00:20:00.207 "req_id": 1 00:20:00.207 } 00:20:00.207 Got JSON-RPC error response 00:20:00.207 response: 00:20:00.207 { 00:20:00.207 "code": -5, 00:20:00.207 "message": "Input/output error" 00:20:00.207 } 00:20:00.207 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:00.207 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:00.207 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 
00:20:00.207 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:00.207 12:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:00.207 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.207 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.207 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.208 12:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:20:00.208 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.208 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.208 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.208 12:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.208 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:00.208 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.208 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:00.208 12:49:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.208 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:00.208 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.208 12:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.208 12:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.774 request: 00:20:00.774 { 00:20:00.774 "name": "nvme0", 00:20:00.774 "trtype": "tcp", 00:20:00.774 "traddr": "10.0.0.2", 00:20:00.774 "adrfam": "ipv4", 00:20:00.774 "trsvcid": "4420", 00:20:00.774 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:00.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:00.774 "prchk_reftag": false, 00:20:00.774 "prchk_guard": false, 00:20:00.774 "hdgst": false, 00:20:00.774 "ddgst": false, 00:20:00.774 "dhchap_key": "key1", 00:20:00.774 "dhchap_ctrlr_key": "ckey1", 00:20:00.774 "method": "bdev_nvme_attach_controller", 00:20:00.774 "req_id": 1 00:20:00.774 } 00:20:00.774 Got JSON-RPC error response 00:20:00.774 response: 00:20:00.774 { 00:20:00.774 "code": -5, 00:20:00.774 "message": "Input/output error" 00:20:00.774 } 00:20:00.774 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:00.774 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:00.774 12:49:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:00.774 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:00.774 12:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:00.774 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.774 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.774 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.774 12:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3926865 00:20:00.774 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3926865 ']' 00:20:00.774 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3926865 00:20:00.774 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:00.774 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.774 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3926865 00:20:00.774 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:00.774 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:00.774 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3926865' 00:20:00.774 killing process with pid 3926865 00:20:00.774 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3926865 00:20:00.774 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3926865 00:20:01.033 12:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L 
nvmf_auth 00:20:01.033 12:49:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:01.033 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:01.033 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.033 12:49:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3960085 00:20:01.033 12:49:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:01.033 12:49:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3960085 00:20:01.033 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3960085 ']' 00:20:01.033 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.033 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:01.033 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:01.033 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:01.033 12:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.600 12:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.600 12:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:01.600 12:49:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:01.600 12:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:01.600 12:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.600 12:49:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.600 12:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:01.600 12:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3960085 00:20:01.600 12:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3960085 ']' 00:20:01.600 12:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.600 12:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:01.600 12:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:01.600 12:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:01.600 12:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.859 12:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.859 12:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:01.859 12:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:01.859 12:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.859 12:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.859 12:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.859 12:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:01.859 12:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.859 12:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:01.859 12:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:01.859 12:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:01.859 12:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.859 12:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:01.859 12:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.859 12:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.859 12:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.859 12:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.859 12:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.816 00:20:02.816 12:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.816 12:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.816 12:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.075 12:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.075 12:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.075 12:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.075 12:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.075 12:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.075 12:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.075 { 00:20:03.075 "cntlid": 1, 00:20:03.075 "qid": 0, 00:20:03.075 "state": "enabled", 00:20:03.075 "thread": "nvmf_tgt_poll_group_000", 00:20:03.075 "listen_address": { 00:20:03.075 "trtype": "TCP", 00:20:03.075 "adrfam": "IPv4", 00:20:03.075 "traddr": "10.0.0.2", 00:20:03.075 "trsvcid": "4420" 00:20:03.075 }, 00:20:03.075 "peer_address": { 00:20:03.075 "trtype": "TCP", 00:20:03.075 "adrfam": "IPv4", 00:20:03.075 "traddr": "10.0.0.1", 00:20:03.075 "trsvcid": 
"48008" 00:20:03.075 }, 00:20:03.075 "auth": { 00:20:03.075 "state": "completed", 00:20:03.075 "digest": "sha512", 00:20:03.075 "dhgroup": "ffdhe8192" 00:20:03.075 } 00:20:03.075 } 00:20:03.075 ]' 00:20:03.332 12:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.332 12:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:03.332 12:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.332 12:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:03.332 12:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.332 12:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.332 12:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.332 12:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.589 12:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OWQyMzY3NTE1MWEzNmFkNzNhOWJkOTI4Nzk3YTdiNDhiMTliYWE4OWViYmE3N2JkNGYyMmIzZWJlNzFmMzFjMUedij8=: 00:20:04.521 12:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.521 12:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:04.521 12:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:04.521 12:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.521 12:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.521 12:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:04.521 12:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.521 12:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.521 12:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.521 12:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:04.522 12:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:05.087 12:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.087 12:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:05.087 12:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.087 12:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:05.087 12:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.087 12:49:56 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:05.087 12:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.087 12:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.087 12:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.345 request: 00:20:05.345 { 00:20:05.345 "name": "nvme0", 00:20:05.345 "trtype": "tcp", 00:20:05.345 "traddr": "10.0.0.2", 00:20:05.345 "adrfam": "ipv4", 00:20:05.345 "trsvcid": "4420", 00:20:05.345 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:05.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:05.345 "prchk_reftag": false, 00:20:05.345 "prchk_guard": false, 00:20:05.345 "hdgst": false, 00:20:05.345 "ddgst": false, 00:20:05.345 "dhchap_key": "key3", 00:20:05.345 "method": "bdev_nvme_attach_controller", 00:20:05.345 "req_id": 1 00:20:05.345 } 00:20:05.345 Got JSON-RPC error response 00:20:05.345 response: 00:20:05.345 { 00:20:05.345 "code": -5, 00:20:05.345 "message": "Input/output error" 00:20:05.345 } 00:20:05.345 12:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:05.345 12:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:05.345 12:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:05.345 12:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:05.345 12:49:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:05.345 12:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:05.345 12:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:05.345 12:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:05.604 12:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.604 12:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:05.604 12:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.604 12:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:05.604 12:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.605 12:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:05.605 12:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.605 12:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.605 
12:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.864 request: 00:20:05.864 { 00:20:05.864 "name": "nvme0", 00:20:05.864 "trtype": "tcp", 00:20:05.864 "traddr": "10.0.0.2", 00:20:05.864 "adrfam": "ipv4", 00:20:05.864 "trsvcid": "4420", 00:20:05.864 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:05.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:05.864 "prchk_reftag": false, 00:20:05.864 "prchk_guard": false, 00:20:05.864 "hdgst": false, 00:20:05.864 "ddgst": false, 00:20:05.864 "dhchap_key": "key3", 00:20:05.864 "method": "bdev_nvme_attach_controller", 00:20:05.864 "req_id": 1 00:20:05.864 } 00:20:05.864 Got JSON-RPC error response 00:20:05.864 response: 00:20:05.864 { 00:20:05.864 "code": -5, 00:20:05.864 "message": "Input/output error" 00:20:05.864 } 00:20:05.864 12:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:05.864 12:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:05.864 12:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:05.864 12:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:05.864 12:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:05.864 12:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:05.864 12:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:05.864 12:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:05.864 12:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:05.864 12:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:06.430 12:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:06.430 12:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.430 12:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.430 12:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.430 12:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:06.430 12:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.430 12:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.430 12:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.430 12:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:06.430 12:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:06.430 12:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:06.430 12:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:06.430 12:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.430 12:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:06.430 12:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.430 12:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:06.430 12:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:06.688 request: 00:20:06.688 { 00:20:06.688 "name": "nvme0", 00:20:06.688 "trtype": "tcp", 00:20:06.688 "traddr": "10.0.0.2", 00:20:06.688 "adrfam": "ipv4", 00:20:06.688 "trsvcid": "4420", 00:20:06.688 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:06.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:06.688 "prchk_reftag": false, 00:20:06.688 "prchk_guard": false, 00:20:06.688 "hdgst": false, 00:20:06.688 "ddgst": false, 00:20:06.688 "dhchap_key": "key0", 00:20:06.688 "dhchap_ctrlr_key": "key1", 00:20:06.688 "method": "bdev_nvme_attach_controller", 00:20:06.688 "req_id": 1 00:20:06.688 } 00:20:06.688 Got JSON-RPC error response 00:20:06.688 response: 00:20:06.688 { 
00:20:06.688 "code": -5, 00:20:06.688 "message": "Input/output error" 00:20:06.688 } 00:20:06.688 12:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:06.688 12:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:06.688 12:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:06.688 12:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:06.688 12:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:06.688 12:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:06.946 00:20:07.204 12:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:07.204 12:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:20:07.205 12:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.463 12:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.463 12:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.463 12:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.721 12:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - 
SIGINT SIGTERM EXIT 00:20:07.721 12:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:07.721 12:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3927048 00:20:07.721 12:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3927048 ']' 00:20:07.721 12:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3927048 00:20:07.721 12:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:07.721 12:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:07.721 12:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3927048 00:20:07.721 12:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:07.721 12:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:07.721 12:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3927048' 00:20:07.721 killing process with pid 3927048 00:20:07.721 12:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3927048 00:20:07.721 12:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3927048 00:20:07.979 12:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:07.980 12:49:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:07.980 12:49:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:07.980 12:49:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:07.980 12:49:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:07.980 12:49:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:07.980 12:49:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:07.980 rmmod nvme_tcp 00:20:07.980 rmmod nvme_fabrics 
00:20:07.980 rmmod nvme_keyring 00:20:07.980 12:49:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:07.980 12:49:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:07.980 12:49:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:07.980 12:49:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3960085 ']' 00:20:07.980 12:49:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3960085 00:20:07.980 12:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3960085 ']' 00:20:07.980 12:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3960085 00:20:07.980 12:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:08.238 12:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:08.238 12:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3960085 00:20:08.238 12:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:08.238 12:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:08.238 12:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3960085' 00:20:08.238 killing process with pid 3960085 00:20:08.238 12:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3960085 00:20:08.238 12:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3960085 00:20:08.238 12:50:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:08.238 12:50:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:08.238 12:50:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:08.238 12:50:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:20:08.238 12:50:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:08.238 12:50:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.238 12:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.238 12:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.773 12:50:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:10.773 12:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Dah /tmp/spdk.key-sha256.wTA /tmp/spdk.key-sha384.8Tk /tmp/spdk.key-sha512.i91 /tmp/spdk.key-sha512.g7Q /tmp/spdk.key-sha384.mTj /tmp/spdk.key-sha256.LLl '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:10.773 00:20:10.773 real 3m12.925s 00:20:10.773 user 7m36.852s 00:20:10.773 sys 0m26.077s 00:20:10.773 12:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:10.773 12:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.773 ************************************ 00:20:10.773 END TEST nvmf_auth_target 00:20:10.773 ************************************ 00:20:10.773 12:50:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:10.773 12:50:02 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:20:10.773 12:50:02 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:10.773 12:50:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:10.773 12:50:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:10.773 12:50:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:10.773 
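The auth-target test above repeatedly exercises a pattern where `bdev_nvme_attach_controller` is *expected* to fail (the `NOT` wrapper), and the log shows the JSON-RPC error response it checks against. As an illustrative sketch only (not part of the SPDK test suite; the field names are taken verbatim from the responses logged above), deciding whether such a failure is the expected one could look like:

```python
import json

# Error payload shaped exactly like the responses in the log above,
# e.g. the failed attach with --dhchap-key key3.
response_text = '{"code": -5, "message": "Input/output error"}'

def is_expected_auth_failure(raw: str) -> bool:
    """Return True when the RPC failed with code -5 (I/O error),
    which is the failure mode the negative-path tests look for."""
    err = json.loads(raw)
    return err.get("code") == -5

print(is_expected_auth_failure(response_text))  # True for the logged failure
```

This mirrors the shell-side logic in `common/autotest_common.sh`, where a nonzero exit status (`es=1`) from the wrapped command counts as the test passing.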
************************************ 00:20:10.773 START TEST nvmf_bdevio_no_huge 00:20:10.773 ************************************ 00:20:10.773 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:10.773 * Looking for test storage... 00:20:10.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:10.773 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:10.773 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:10.773 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.773 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.773 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.773 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.773 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.773 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.773 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.773 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.773 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.773 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.773 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:10.773 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 
00:20:10.773 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.773 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.773 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:10.773 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:10.773 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:10.773 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:10.774 12:50:02 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:10.774 12:50:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:16.052 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:16.052 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:16.052 Found net devices under 0000:af:00.0: cvl_0_0 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.052 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:16.053 Found net devices under 0000:af:00.1: cvl_0_1 00:20:16.053 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.053 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:16.053 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:16.053 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:16.053 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:16.053 12:50:07 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:16.053 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.053 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:16.053 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:16.053 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:16.053 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:16.053 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:16.053 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:16.053 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:16.053 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.053 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:16.053 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:16.053 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:16.053 12:50:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:16.312 
12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:16.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:16.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:20:16.312 00:20:16.312 --- 10.0.0.2 ping statistics --- 00:20:16.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.312 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:16.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:16.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:20:16.312 00:20:16.312 --- 10.0.0.1 ping statistics --- 00:20:16.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.312 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:16.312 12:50:08 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3964922 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3964922 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3964922 ']' 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:16.312 12:50:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:16.570 [2024-07-15 12:50:08.285925] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:20:16.570 [2024-07-15 12:50:08.285984] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:16.571 [2024-07-15 12:50:08.397030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:16.830 [2024-07-15 12:50:08.628672] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.830 [2024-07-15 12:50:08.628734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.830 [2024-07-15 12:50:08.628756] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.830 [2024-07-15 12:50:08.628774] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.830 [2024-07-15 12:50:08.628790] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:16.830 [2024-07-15 12:50:08.628891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:16.830 [2024-07-15 12:50:08.629041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:16.830 [2024-07-15 12:50:08.629367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:16.830 [2024-07-15 12:50:08.629379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:17.397 [2024-07-15 12:50:09.276622] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:17.397 Malloc0 00:20:17.397 12:50:09 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:17.397 [2024-07-15 12:50:09.329970] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:17.397 12:50:09 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:17.397 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:17.398 { 00:20:17.398 "params": { 00:20:17.398 "name": "Nvme$subsystem", 00:20:17.398 "trtype": "$TEST_TRANSPORT", 00:20:17.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.398 "adrfam": "ipv4", 00:20:17.398 "trsvcid": "$NVMF_PORT", 00:20:17.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.398 "hdgst": ${hdgst:-false}, 00:20:17.398 "ddgst": ${ddgst:-false} 00:20:17.398 }, 00:20:17.398 "method": "bdev_nvme_attach_controller" 00:20:17.398 } 00:20:17.398 EOF 00:20:17.398 )") 00:20:17.656 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:17.656 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:17.656 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:17.656 12:50:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:17.656 "params": { 00:20:17.656 "name": "Nvme1", 00:20:17.656 "trtype": "tcp", 00:20:17.656 "traddr": "10.0.0.2", 00:20:17.656 "adrfam": "ipv4", 00:20:17.656 "trsvcid": "4420", 00:20:17.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:17.656 "hdgst": false, 00:20:17.656 "ddgst": false 00:20:17.656 }, 00:20:17.656 "method": "bdev_nvme_attach_controller" 00:20:17.656 }' 00:20:17.656 [2024-07-15 12:50:09.383676] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:20:17.656 [2024-07-15 12:50:09.383741] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3965189 ] 00:20:17.656 [2024-07-15 12:50:09.469624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:17.656 [2024-07-15 12:50:09.588029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.656 [2024-07-15 12:50:09.588144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.656 [2024-07-15 12:50:09.588144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.915 I/O targets: 00:20:17.915 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:17.915 00:20:17.915 00:20:17.915 CUnit - A unit testing framework for C - Version 2.1-3 00:20:17.915 http://cunit.sourceforge.net/ 00:20:17.915 00:20:17.915 00:20:17.915 Suite: bdevio tests on: Nvme1n1 00:20:17.915 Test: blockdev write read block ...passed 00:20:17.915 Test: blockdev write zeroes read block ...passed 00:20:17.915 Test: blockdev write zeroes read no split ...passed 00:20:18.174 Test: blockdev write zeroes read split ...passed 00:20:18.174 Test: blockdev write zeroes read split partial ...passed 00:20:18.174 Test: blockdev reset ...[2024-07-15 12:50:09.956896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:18.174 [2024-07-15 12:50:09.956974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9d520 (9): Bad file descriptor 00:20:18.174 [2024-07-15 12:50:09.968436] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:18.174 passed 00:20:18.174 Test: blockdev write read 8 blocks ...passed 00:20:18.174 Test: blockdev write read size > 128k ...passed 00:20:18.174 Test: blockdev write read invalid size ...passed 00:20:18.174 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:18.174 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:18.174 Test: blockdev write read max offset ...passed 00:20:18.174 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:18.433 Test: blockdev writev readv 8 blocks ...passed 00:20:18.433 Test: blockdev writev readv 30 x 1block ...passed 00:20:18.433 Test: blockdev writev readv block ...passed 00:20:18.433 Test: blockdev writev readv size > 128k ...passed 00:20:18.433 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:18.433 Test: blockdev comparev and writev ...[2024-07-15 12:50:10.226351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:18.433 [2024-07-15 12:50:10.226416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:18.433 [2024-07-15 12:50:10.226458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:18.433 [2024-07-15 12:50:10.226482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:18.433 [2024-07-15 12:50:10.227185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:18.433 [2024-07-15 12:50:10.227218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:18.433 [2024-07-15 12:50:10.227264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:18.433 [2024-07-15 12:50:10.227288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:18.433 [2024-07-15 12:50:10.227980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:18.433 [2024-07-15 12:50:10.228011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:18.433 [2024-07-15 12:50:10.228048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:18.433 [2024-07-15 12:50:10.228069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:18.433 [2024-07-15 12:50:10.228770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:18.433 [2024-07-15 12:50:10.228802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:18.433 [2024-07-15 12:50:10.228838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:18.433 [2024-07-15 12:50:10.228860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:18.433 passed 00:20:18.433 Test: blockdev nvme passthru rw ...passed 00:20:18.433 Test: blockdev nvme passthru vendor specific ...[2024-07-15 12:50:10.310826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:18.433 [2024-07-15 12:50:10.310872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:18.433 [2024-07-15 12:50:10.311130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:18.433 [2024-07-15 12:50:10.311161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:18.433 [2024-07-15 12:50:10.311425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:18.433 [2024-07-15 12:50:10.311456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:18.433 [2024-07-15 12:50:10.311716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:18.433 [2024-07-15 12:50:10.311745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:18.433 passed 00:20:18.433 Test: blockdev nvme admin passthru ...passed 00:20:18.433 Test: blockdev copy ...passed 00:20:18.433 00:20:18.433 Run Summary: Type Total Ran Passed Failed Inactive 00:20:18.433 suites 1 1 n/a 0 0 00:20:18.433 tests 23 23 23 0 0 00:20:18.433 asserts 152 152 152 0 n/a 00:20:18.433 00:20:18.433 Elapsed time = 1.230 seconds 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:19.002 rmmod nvme_tcp 00:20:19.002 rmmod nvme_fabrics 00:20:19.002 rmmod nvme_keyring 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3964922 ']' 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3964922 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3964922 ']' 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3964922 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3964922 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 3964922' 00:20:19.002 killing process with pid 3964922 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3964922 00:20:19.002 12:50:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3964922 00:20:19.938 12:50:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:19.938 12:50:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:19.938 12:50:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:19.938 12:50:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:19.938 12:50:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:19.938 12:50:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.938 12:50:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:19.938 12:50:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.841 12:50:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:21.841 00:20:21.841 real 0m11.326s 00:20:21.841 user 0m14.584s 00:20:21.841 sys 0m5.835s 00:20:21.841 12:50:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:21.841 12:50:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:21.841 ************************************ 00:20:21.841 END TEST nvmf_bdevio_no_huge 00:20:21.841 ************************************ 00:20:21.841 12:50:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:21.841 12:50:13 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:21.841 12:50:13 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:21.841 12:50:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:21.841 12:50:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:21.841 ************************************ 00:20:21.841 START TEST nvmf_tls 00:20:21.841 ************************************ 00:20:21.841 12:50:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:22.101 * Looking for test storage... 00:20:22.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:22.101 12:50:13 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:22.101 12:50:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.669 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:20:28.669 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:28.669 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:28.669 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:28.669 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:28.669 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:28.669 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:28.669 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:28.669 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:28.669 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:28.669 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:28.669 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:28.669 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:28.669 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:28.669 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:28.669 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:28.670 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:28.670 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.670 12:50:19 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:28.670 Found net devices under 0000:af:00.0: cvl_0_0 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:28.670 Found net devices under 0000:af:00.1: cvl_0_1 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:28.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:20:28.670 00:20:28.670 --- 10.0.0.2 ping statistics --- 00:20:28.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.670 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:28.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:28.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:20:28.670 00:20:28.670 --- 10.0.0.1 ping statistics --- 00:20:28.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.670 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3969175 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3969175 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3969175 ']' 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:28.670 12:50:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.670 [2024-07-15 12:50:19.783215] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:20:28.670 [2024-07-15 12:50:19.783279] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.670 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.670 [2024-07-15 12:50:19.871910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.670 [2024-07-15 12:50:19.974007] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.670 [2024-07-15 12:50:19.974055] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.670 [2024-07-15 12:50:19.974068] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.670 [2024-07-15 12:50:19.974079] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.670 [2024-07-15 12:50:19.974088] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
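The namespace plumbing recorded above (nvmf/common.sh@229-268: `ip netns add`, moving the target port into the namespace, assigning 10.0.0.1/2, opening TCP 4420, then the two ping checks) can be sketched as a standalone helper. This is a reconstruction from the log, not SPDK's own code; the interface names `cvl_0_0`/`cvl_0_1` come from the log, and running it requires root plus real NICs, so it is defined here but not invoked:

```shell
# Sketch of the target-namespace setup seen in nvmf/common.sh@229-268 above.
# Hypothetical helper; requires root and two physical ports (e.g. cvl_0_0, cvl_0_1).
setup_target_ns() {
    local target_if=$1 initiator_if=$2
    local ns=${target_if}_ns_spdk

    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"          # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator side stays in the root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up
    # allow NVMe/TCP (port 4420) in on the initiator interface
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    # verify both directions, as the log does
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

Usage in the log's terms would be `setup_target_ns cvl_0_0 cvl_0_1`, after which the target is started under `ip netns exec cvl_0_0_ns_spdk`.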
00:20:28.670 [2024-07-15 12:50:19.974114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.240 12:50:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.240 12:50:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:29.240 12:50:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:29.240 12:50:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:29.240 12:50:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.240 12:50:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.240 12:50:20 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:29.240 12:50:20 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:29.240 true 00:20:29.240 12:50:21 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:29.240 12:50:21 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:29.499 12:50:21 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:29.499 12:50:21 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:29.499 12:50:21 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:30.067 12:50:21 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:30.067 12:50:21 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:30.326 12:50:22 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:30.326 12:50:22 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:30.326 12:50:22 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:30.894 12:50:22 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:30.894 12:50:22 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:31.462 12:50:23 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:31.462 12:50:23 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:31.462 12:50:23 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:31.462 12:50:23 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:31.720 12:50:23 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:31.720 12:50:23 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:31.720 12:50:23 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:32.287 12:50:23 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:32.287 12:50:23 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:32.287 12:50:24 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:32.287 12:50:24 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:32.287 12:50:24 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:32.545 12:50:24 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:32.545 12:50:24 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:32.804 12:50:24 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:20:32.804 12:50:24 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:32.804 12:50:24 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:32.804 12:50:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:32.804 12:50:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:32.804 12:50:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:32.804 12:50:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:32.804 12:50:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:32.804 12:50:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:33.063 12:50:24 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:33.063 12:50:24 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:33.063 12:50:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:33.063 12:50:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:33.063 12:50:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:33.063 12:50:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:33.063 12:50:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:33.063 12:50:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:33.063 12:50:24 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:33.063 12:50:24 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:33.063 12:50:24 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.XLqcQl73TT 00:20:33.063 12:50:24 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:33.063 
12:50:24 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.mNIgDKfxVH 00:20:33.063 12:50:24 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:33.063 12:50:24 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:33.063 12:50:24 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.XLqcQl73TT 00:20:33.063 12:50:24 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.mNIgDKfxVH 00:20:33.063 12:50:24 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:33.322 12:50:25 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:33.889 12:50:25 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.XLqcQl73TT 00:20:33.889 12:50:25 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.XLqcQl73TT 00:20:33.889 12:50:25 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:34.458 [2024-07-15 12:50:26.112286] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.458 12:50:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:34.717 12:50:26 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:35.284 [2024-07-15 12:50:27.123081] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:35.284 [2024-07-15 12:50:27.123325] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:20:35.284 12:50:27 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:35.543 malloc0 00:20:35.543 12:50:27 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:36.110 12:50:27 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XLqcQl73TT 00:20:36.684 [2024-07-15 12:50:28.368813] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:36.684 12:50:28 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.XLqcQl73TT 00:20:36.684 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.762 Initializing NVMe Controllers 00:20:46.762 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:46.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:46.762 Initialization complete. Launching workers. 
00:20:46.762 ======================================================== 00:20:46.762 Latency(us) 00:20:46.762 Device Information : IOPS MiB/s Average min max 00:20:46.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8423.05 32.90 7600.41 1172.01 8337.91 00:20:46.762 ======================================================== 00:20:46.762 Total : 8423.05 32.90 7600.41 1172.01 8337.91 00:20:46.762 00:20:46.762 12:50:38 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XLqcQl73TT 00:20:46.762 12:50:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:46.762 12:50:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:46.762 12:50:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:46.762 12:50:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XLqcQl73TT' 00:20:46.762 12:50:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:46.762 12:50:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3972598 00:20:46.762 12:50:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:46.762 12:50:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:46.762 12:50:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3972598 /var/tmp/bdevperf.sock 00:20:46.762 12:50:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3972598 ']' 00:20:46.762 12:50:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.762 12:50:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.762 12:50:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:46.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:46.762 12:50:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.762 12:50:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.762 [2024-07-15 12:50:38.629240] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:20:46.762 [2024-07-15 12:50:38.629371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3972598 ] 00:20:46.762 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.021 [2024-07-15 12:50:38.775999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.021 [2024-07-15 12:50:38.924150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.280 12:50:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:47.280 12:50:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:47.280 12:50:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XLqcQl73TT 00:20:47.538 [2024-07-15 12:50:39.283359] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:47.538 [2024-07-15 12:50:39.283508] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:47.538 TLSTESTn1 00:20:47.538 12:50:39 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:47.798 Running I/O for 10 seconds... 00:20:57.777 00:20:57.777 Latency(us) 00:20:57.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.777 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:57.777 Verification LBA range: start 0x0 length 0x2000 00:20:57.777 TLSTESTn1 : 10.03 2829.01 11.05 0.00 0.00 45103.69 12988.04 47900.86 00:20:57.777 =================================================================================================================== 00:20:57.777 Total : 2829.01 11.05 0.00 0.00 45103.69 12988.04 47900.86 00:20:57.777 0 00:20:57.777 12:50:49 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:57.777 12:50:49 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3972598 00:20:57.777 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3972598 ']' 00:20:57.777 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3972598 00:20:57.777 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:57.777 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.777 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3972598 00:20:57.777 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:57.777 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:57.777 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3972598' 00:20:57.777 killing process with pid 3972598 00:20:57.777 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3972598 00:20:57.777 Received shutdown signal, test time was about 10.000000 seconds 00:20:57.777 00:20:57.777 
Latency(us) 00:20:57.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.777 =================================================================================================================== 00:20:57.777 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:57.777 [2024-07-15 12:50:49.633477] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:57.777 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3972598 00:20:58.346 12:50:49 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mNIgDKfxVH 00:20:58.346 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:58.346 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mNIgDKfxVH 00:20:58.346 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:58.346 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:58.346 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:58.346 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:58.346 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mNIgDKfxVH 00:20:58.346 12:50:49 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:58.346 12:50:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:58.346 12:50:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:58.346 12:50:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mNIgDKfxVH' 00:20:58.346 12:50:49 nvmf_tcp.nvmf_tls -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:58.346 12:50:49 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3974462 00:20:58.346 12:50:49 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:58.346 12:50:49 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:58.346 12:50:49 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3974462 /var/tmp/bdevperf.sock 00:20:58.346 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3974462 ']' 00:20:58.346 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.346 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:58.346 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:58.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:58.347 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:58.347 12:50:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.347 [2024-07-15 12:50:50.046908] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:20:58.347 [2024-07-15 12:50:50.046984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3974462 ] 00:20:58.347 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.347 [2024-07-15 12:50:50.163399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.605 [2024-07-15 12:50:50.311147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.172 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:59.172 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:59.172 12:50:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mNIgDKfxVH 00:20:59.430 [2024-07-15 12:50:51.243205] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:59.430 [2024-07-15 12:50:51.243376] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:59.430 [2024-07-15 12:50:51.252139] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:59.430 [2024-07-15 12:50:51.252382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x667af0 (107): Transport endpoint is not connected 00:20:59.430 [2024-07-15 12:50:51.253361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x667af0 (9): Bad file descriptor 00:20:59.430 [2024-07-15 
12:50:51.254359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:59.430 [2024-07-15 12:50:51.254386] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:59.430 [2024-07-15 12:50:51.254412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:59.430 request: 00:20:59.430 { 00:20:59.430 "name": "TLSTEST", 00:20:59.430 "trtype": "tcp", 00:20:59.430 "traddr": "10.0.0.2", 00:20:59.430 "adrfam": "ipv4", 00:20:59.430 "trsvcid": "4420", 00:20:59.430 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.430 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:59.430 "prchk_reftag": false, 00:20:59.430 "prchk_guard": false, 00:20:59.430 "hdgst": false, 00:20:59.430 "ddgst": false, 00:20:59.430 "psk": "/tmp/tmp.mNIgDKfxVH", 00:20:59.430 "method": "bdev_nvme_attach_controller", 00:20:59.430 "req_id": 1 00:20:59.430 } 00:20:59.430 Got JSON-RPC error response 00:20:59.430 response: 00:20:59.430 { 00:20:59.430 "code": -5, 00:20:59.430 "message": "Input/output error" 00:20:59.430 } 00:20:59.430 12:50:51 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3974462 00:20:59.430 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3974462 ']' 00:20:59.430 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3974462 00:20:59.430 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:59.430 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:59.430 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3974462 00:20:59.430 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:59.430 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:59.430 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
3974462' 00:20:59.430 killing process with pid 3974462 00:20:59.430 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3974462 00:20:59.430 Received shutdown signal, test time was about 10.000000 seconds 00:20:59.430 00:20:59.430 Latency(us) 00:20:59.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.430 =================================================================================================================== 00:20:59.430 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:59.430 [2024-07-15 12:50:51.335192] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:59.430 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3974462 00:20:59.689 12:50:51 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:59.689 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:59.689 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:59.689 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:59.689 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:59.689 12:50:51 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XLqcQl73TT 00:20:59.689 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:59.689 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XLqcQl73TT 00:20:59.689 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:59.949 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:59.949 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:20:59.949 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:59.949 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XLqcQl73TT 00:20:59.949 12:50:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:59.949 12:50:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:59.949 12:50:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:59.949 12:50:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XLqcQl73TT' 00:20:59.949 12:50:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:59.949 12:50:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3974737 00:20:59.949 12:50:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:59.949 12:50:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:59.949 12:50:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3974737 /var/tmp/bdevperf.sock 00:20:59.949 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3974737 ']' 00:20:59.949 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.949 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:59.949 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:59.949 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:59.949 12:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.949 [2024-07-15 12:50:51.684563] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:20:59.949 [2024-07-15 12:50:51.684640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3974737 ] 00:20:59.949 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.949 [2024-07-15 12:50:51.800570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.208 [2024-07-15 12:50:51.944655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.775 12:50:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:00.775 12:50:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:00.775 12:50:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.XLqcQl73TT 00:21:01.034 [2024-07-15 12:50:52.787286] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:01.034 [2024-07-15 12:50:52.787436] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:01.034 [2024-07-15 12:50:52.795941] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:01.034 [2024-07-15 12:50:52.795974] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for 
identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:01.034 [2024-07-15 12:50:52.796017] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:01.034 [2024-07-15 12:50:52.796322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc25af0 (107): Transport endpoint is not connected 00:21:01.034 [2024-07-15 12:50:52.797302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc25af0 (9): Bad file descriptor 00:21:01.034 [2024-07-15 12:50:52.798300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.034 [2024-07-15 12:50:52.798329] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:01.034 [2024-07-15 12:50:52.798354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
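The "Could not find PSK for identity" errors above show the TLS PSK identity string the target searches for: `NVMe0R01 <hostnqn> <subnqn>`. A minimal sketch of how such an identity is assembled, assuming the `NVMe0R<hash-id>` prefix layout seen in the log (the two-digit hash indicator and the single-space separators are taken directly from the logged identity, not from SPDK source):

```python
def tls_psk_identity(hostnqn: str, subnqn: str, hash_id: int = 1) -> str:
    """Build an NVMe/TCP TLS PSK identity string.

    Shape assumed from the log: "NVMe" + protocol version digit 0 +
    "R" (retained PSK) + a 2-digit hash indicator, then the host NQN
    and subsystem NQN separated by single spaces.
    """
    return f"NVMe0R{hash_id:02d} {hostnqn} {subnqn}"

# Reproduces the identity the target reported it could not find a PSK for:
print(tls_psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))
```

This is why the mismatched host/subsystem pairing in the test fails by design: the target only holds a PSK registered for `host1`/`cnode1`, so the lookup for this identity finds nothing and the connection is torn down.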
00:21:01.034 request: 00:21:01.034 { 00:21:01.034 "name": "TLSTEST", 00:21:01.034 "trtype": "tcp", 00:21:01.034 "traddr": "10.0.0.2", 00:21:01.034 "adrfam": "ipv4", 00:21:01.034 "trsvcid": "4420", 00:21:01.034 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.034 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:01.034 "prchk_reftag": false, 00:21:01.034 "prchk_guard": false, 00:21:01.034 "hdgst": false, 00:21:01.034 "ddgst": false, 00:21:01.034 "psk": "/tmp/tmp.XLqcQl73TT", 00:21:01.034 "method": "bdev_nvme_attach_controller", 00:21:01.034 "req_id": 1 00:21:01.034 } 00:21:01.034 Got JSON-RPC error response 00:21:01.034 response: 00:21:01.034 { 00:21:01.034 "code": -5, 00:21:01.034 "message": "Input/output error" 00:21:01.034 } 00:21:01.034 12:50:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3974737 00:21:01.034 12:50:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3974737 ']' 00:21:01.034 12:50:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3974737 00:21:01.034 12:50:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:01.034 12:50:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:01.034 12:50:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3974737 00:21:01.034 12:50:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:01.034 12:50:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:01.034 12:50:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3974737' 00:21:01.034 killing process with pid 3974737 00:21:01.034 12:50:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3974737 00:21:01.034 Received shutdown signal, test time was about 10.000000 seconds 00:21:01.034 00:21:01.034 Latency(us) 00:21:01.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.034 
=================================================================================================================== 00:21:01.034 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:01.034 [2024-07-15 12:50:52.879632] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:01.034 12:50:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3974737 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XLqcQl73TT 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XLqcQl73TT 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XLqcQl73TT 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XLqcQl73TT' 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3975008 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3975008 /var/tmp/bdevperf.sock 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3975008 ']' 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:01.293 12:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.293 [2024-07-15 12:50:53.216125] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:21:01.293 [2024-07-15 12:50:53.216188] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3975008 ] 00:21:01.552 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.552 [2024-07-15 12:50:53.330392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.552 [2024-07-15 12:50:53.477065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.495 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:02.495 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:02.495 12:50:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XLqcQl73TT 00:21:02.495 [2024-07-15 12:50:54.410455] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.495 [2024-07-15 12:50:54.410612] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:02.495 [2024-07-15 12:50:54.419189] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:02.495 [2024-07-15 12:50:54.419224] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:02.495 [2024-07-15 12:50:54.419278] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:02.495 [2024-07-15 12:50:54.419573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x814af0 (107): Transport endpoint is not connected 00:21:02.495 [2024-07-15 12:50:54.420555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x814af0 (9): Bad file descriptor 00:21:02.495 [2024-07-15 12:50:54.421553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:02.495 [2024-07-15 12:50:54.421581] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:02.495 [2024-07-15 12:50:54.421605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:02.495 request: 00:21:02.495 { 00:21:02.495 "name": "TLSTEST", 00:21:02.495 "trtype": "tcp", 00:21:02.495 "traddr": "10.0.0.2", 00:21:02.495 "adrfam": "ipv4", 00:21:02.495 "trsvcid": "4420", 00:21:02.495 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:02.495 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:02.495 "prchk_reftag": false, 00:21:02.495 "prchk_guard": false, 00:21:02.495 "hdgst": false, 00:21:02.495 "ddgst": false, 00:21:02.495 "psk": "/tmp/tmp.XLqcQl73TT", 00:21:02.495 "method": "bdev_nvme_attach_controller", 00:21:02.495 "req_id": 1 00:21:02.495 } 00:21:02.495 Got JSON-RPC error response 00:21:02.495 response: 00:21:02.495 { 00:21:02.495 "code": -5, 00:21:02.495 "message": "Input/output error" 00:21:02.495 } 00:21:02.755 12:50:54 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3975008 00:21:02.755 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3975008 ']' 00:21:02.755 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3975008 00:21:02.755 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:02.755 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:02.755 12:50:54 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3975008 00:21:02.755 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:02.755 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:02.755 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3975008' 00:21:02.755 killing process with pid 3975008 00:21:02.755 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3975008 00:21:02.755 Received shutdown signal, test time was about 10.000000 seconds 00:21:02.755 00:21:02.755 Latency(us) 00:21:02.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.755 =================================================================================================================== 00:21:02.755 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:02.755 [2024-07-15 12:50:54.505123] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:02.755 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3975008 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 '' 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3975349 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3975349 /var/tmp/bdevperf.sock 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3975349 ']' 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:03.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:03.014 12:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.014 [2024-07-15 12:50:54.904328] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:21:03.014 [2024-07-15 12:50:54.904396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3975349 ] 00:21:03.014 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.273 [2024-07-15 12:50:55.018489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.273 [2024-07-15 12:50:55.164580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.209 12:50:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:04.209 12:50:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:04.209 12:50:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:04.209 [2024-07-15 12:50:56.100667] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:04.209 [2024-07-15 12:50:56.102162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216c030 (9): Bad file descriptor 00:21:04.209 [2024-07-15 12:50:56.103157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:04.209 [2024-07-15 12:50:56.103184] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:04.209 [2024-07-15 12:50:56.103209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:04.209 request: 00:21:04.209 { 00:21:04.209 "name": "TLSTEST", 00:21:04.209 "trtype": "tcp", 00:21:04.209 "traddr": "10.0.0.2", 00:21:04.209 "adrfam": "ipv4", 00:21:04.209 "trsvcid": "4420", 00:21:04.209 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.209 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:04.209 "prchk_reftag": false, 00:21:04.209 "prchk_guard": false, 00:21:04.209 "hdgst": false, 00:21:04.209 "ddgst": false, 00:21:04.209 "method": "bdev_nvme_attach_controller", 00:21:04.209 "req_id": 1 00:21:04.209 } 00:21:04.209 Got JSON-RPC error response 00:21:04.209 response: 00:21:04.209 { 00:21:04.209 "code": -5, 00:21:04.209 "message": "Input/output error" 00:21:04.209 } 00:21:04.209 12:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3975349 00:21:04.209 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3975349 ']' 00:21:04.209 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3975349 00:21:04.209 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:04.209 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:04.209 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3975349 00:21:04.468 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:04.468 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:04.468 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3975349' 00:21:04.468 killing process with pid 3975349 00:21:04.468 12:50:56 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@967 -- # kill 3975349 00:21:04.468 Received shutdown signal, test time was about 10.000000 seconds 00:21:04.468 00:21:04.468 Latency(us) 00:21:04.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.468 =================================================================================================================== 00:21:04.468 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:04.468 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3975349 00:21:04.727 12:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:04.727 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:04.727 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:04.727 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:04.727 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:04.727 12:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3969175 00:21:04.727 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3969175 ']' 00:21:04.727 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3969175 00:21:04.727 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:04.727 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:04.727 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3969175 00:21:04.727 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:04.727 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:04.727 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3969175' 00:21:04.727 killing process with pid 3969175 00:21:04.727 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 
3969175 00:21:04.727 [2024-07-15 12:50:56.562591] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:04.727 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3969175 00:21:04.985 12:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:04.985 12:50:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:04.985 12:50:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:04.985 12:50:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:04.985 12:50:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:04.985 12:50:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:04.985 12:50:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:05.245 12:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:05.245 12:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:05.245 12:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.ZAw7DNhBVc 00:21:05.245 12:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:05.245 12:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.ZAw7DNhBVc 00:21:05.245 12:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:05.245 12:50:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:05.245 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:05.245 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.245 12:50:56 nvmf_tcp.nvmf_tls 
-- nvmf/common.sh@481 -- # nvmfpid=3975812 00:21:05.245 12:50:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3975812 00:21:05.245 12:50:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:05.245 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3975812 ']' 00:21:05.245 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.245 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:05.245 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.245 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:05.245 12:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.245 [2024-07-15 12:50:57.008066] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:21:05.245 [2024-07-15 12:50:57.008133] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.245 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.245 [2024-07-15 12:50:57.096563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.503 [2024-07-15 12:50:57.193752] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.503 [2024-07-15 12:50:57.193806] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:05.503 [2024-07-15 12:50:57.193820] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.503 [2024-07-15 12:50:57.193831] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.503 [2024-07-15 12:50:57.193841] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:05.503 [2024-07-15 12:50:57.193870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.071 12:50:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:06.071 12:50:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:06.071 12:50:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:06.071 12:50:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:06.071 12:50:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.071 12:50:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.071 12:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.ZAw7DNhBVc 00:21:06.071 12:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ZAw7DNhBVc 00:21:06.071 12:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:06.329 [2024-07-15 12:50:58.135587] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.329 12:50:58 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:06.587 12:50:58 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:21:06.845 [2024-07-15 12:50:58.624916] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:06.845 [2024-07-15 12:50:58.625159] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.845 12:50:58 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:07.103 malloc0 00:21:07.103 12:50:58 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:07.361 12:50:59 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZAw7DNhBVc 00:21:07.619 [2024-07-15 12:50:59.345279] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:07.619 12:50:59 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZAw7DNhBVc 00:21:07.619 12:50:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:07.619 12:50:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:07.619 12:50:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:07.619 12:50:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZAw7DNhBVc' 00:21:07.619 12:50:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:07.619 12:50:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3976173 00:21:07.619 12:50:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:07.619 12:50:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:07.619 12:50:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3976173 /var/tmp/bdevperf.sock 00:21:07.619 12:50:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3976173 ']' 00:21:07.619 12:50:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.619 12:50:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:07.619 12:50:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:07.619 12:50:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:07.619 12:50:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.619 [2024-07-15 12:50:59.415019] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:21:07.619 [2024-07-15 12:50:59.415076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3976173 ] 00:21:07.619 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.619 [2024-07-15 12:50:59.526844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.878 [2024-07-15 12:50:59.674588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.444 12:51:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.444 12:51:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:08.444 12:51:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZAw7DNhBVc 00:21:08.703 [2024-07-15 12:51:00.590447] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:08.703 [2024-07-15 12:51:00.590602] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:08.961 TLSTESTn1 00:21:08.962 12:51:00 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:08.962 Running I/O for 10 seconds... 
00:21:18.935 00:21:18.935 Latency(us) 00:21:18.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.935 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:18.935 Verification LBA range: start 0x0 length 0x2000 00:21:18.935 TLSTESTn1 : 10.02 2804.52 10.96 0.00 0.00 45519.73 9770.82 43372.92 00:21:18.935 =================================================================================================================== 00:21:18.935 Total : 2804.52 10.96 0.00 0.00 45519.73 9770.82 43372.92 00:21:18.935 0 00:21:18.935 12:51:10 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:18.935 12:51:10 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3976173 00:21:18.935 12:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3976173 ']' 00:21:18.935 12:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3976173 00:21:18.935 12:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:19.194 12:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:19.194 12:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3976173 00:21:19.194 12:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:19.194 12:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:19.194 12:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3976173' 00:21:19.194 killing process with pid 3976173 00:21:19.194 12:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3976173 00:21:19.194 Received shutdown signal, test time was about 10.000000 seconds 00:21:19.194 00:21:19.194 Latency(us) 00:21:19.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.194 
=================================================================================================================== 00:21:19.194 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:19.194 [2024-07-15 12:51:10.921993] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:19.194 12:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3976173 00:21:19.453 12:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.ZAw7DNhBVc 00:21:19.453 12:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZAw7DNhBVc 00:21:19.453 12:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:19.453 12:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZAw7DNhBVc 00:21:19.453 12:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:19.453 12:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:19.453 12:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:19.453 12:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:19.453 12:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZAw7DNhBVc 00:21:19.453 12:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:19.453 12:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:19.453 12:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:19.453 12:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZAw7DNhBVc' 00:21:19.453 12:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:19.453 12:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3978722 00:21:19.453 12:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:19.454 12:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:19.454 12:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3978722 /var/tmp/bdevperf.sock 00:21:19.454 12:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3978722 ']' 00:21:19.454 12:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.454 12:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:19.454 12:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:19.454 12:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:19.454 12:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.454 [2024-07-15 12:51:11.342456] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:21:19.454 [2024-07-15 12:51:11.342548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3978722 ] 00:21:19.454 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.712 [2024-07-15 12:51:11.457592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.712 [2024-07-15 12:51:11.599689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.278 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:20.278 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:20.278 12:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZAw7DNhBVc 00:21:20.537 [2024-07-15 12:51:12.363350] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:20.537 [2024-07-15 12:51:12.363450] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:20.537 [2024-07-15 12:51:12.363470] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.ZAw7DNhBVc 00:21:20.537 request: 00:21:20.537 { 00:21:20.537 "name": "TLSTEST", 00:21:20.537 "trtype": "tcp", 00:21:20.537 "traddr": "10.0.0.2", 00:21:20.537 "adrfam": "ipv4", 00:21:20.537 "trsvcid": "4420", 00:21:20.537 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.537 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:20.537 "prchk_reftag": false, 00:21:20.537 "prchk_guard": false, 00:21:20.537 "hdgst": false, 00:21:20.537 "ddgst": false, 00:21:20.537 "psk": "/tmp/tmp.ZAw7DNhBVc", 00:21:20.537 "method": "bdev_nvme_attach_controller", 
00:21:20.537 "req_id": 1 00:21:20.537 } 00:21:20.537 Got JSON-RPC error response 00:21:20.537 response: 00:21:20.537 { 00:21:20.537 "code": -1, 00:21:20.537 "message": "Operation not permitted" 00:21:20.537 } 00:21:20.537 12:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3978722 00:21:20.537 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3978722 ']' 00:21:20.537 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3978722 00:21:20.537 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:20.537 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.537 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3978722 00:21:20.537 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:20.537 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:20.537 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3978722' 00:21:20.537 killing process with pid 3978722 00:21:20.537 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3978722 00:21:20.537 Received shutdown signal, test time was about 10.000000 seconds 00:21:20.537 00:21:20.537 Latency(us) 00:21:20.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.537 =================================================================================================================== 00:21:20.537 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:20.537 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3978722 00:21:20.796 12:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:20.796 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:20.796 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:20.796 
12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:20.796 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:20.796 12:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3975812 00:21:20.796 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3975812 ']' 00:21:20.797 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3975812 00:21:20.797 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:20.797 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.797 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3975812 00:21:21.055 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:21.055 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:21.055 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3975812' 00:21:21.055 killing process with pid 3975812 00:21:21.055 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3975812 00:21:21.055 [2024-07-15 12:51:12.771637] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:21.055 12:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3975812 00:21:21.314 12:51:13 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:21.314 12:51:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:21.314 12:51:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:21.314 12:51:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.314 12:51:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3979137 00:21:21.314 12:51:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3979137 
00:21:21.314 12:51:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:21.314 12:51:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3979137 ']' 00:21:21.314 12:51:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.314 12:51:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:21.314 12:51:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.314 12:51:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:21.314 12:51:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.314 [2024-07-15 12:51:13.152355] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:21:21.314 [2024-07-15 12:51:13.152427] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.314 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.314 [2024-07-15 12:51:13.243335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.572 [2024-07-15 12:51:13.348870] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.572 [2024-07-15 12:51:13.348919] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:21.572 [2024-07-15 12:51:13.348933] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.572 [2024-07-15 12:51:13.348944] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.572 [2024-07-15 12:51:13.348953] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:21.572 [2024-07-15 12:51:13.348979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.506 12:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:22.506 12:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:22.506 12:51:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:22.506 12:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:22.506 12:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.506 12:51:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.506 12:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.ZAw7DNhBVc 00:21:22.506 12:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:22.506 12:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ZAw7DNhBVc 00:21:22.506 12:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:21:22.506 12:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:22.506 12:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:21:22.506 12:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:22.506 12:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.ZAw7DNhBVc 00:21:22.506 12:51:14 nvmf_tcp.nvmf_tls -- 
target/tls.sh@49 -- # local key=/tmp/tmp.ZAw7DNhBVc 00:21:22.506 12:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:22.506 [2024-07-15 12:51:14.375063] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.506 12:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:22.765 12:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:23.024 [2024-07-15 12:51:14.892466] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:23.024 [2024-07-15 12:51:14.892712] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.024 12:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:23.281 malloc0 00:21:23.281 12:51:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:23.539 12:51:15 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZAw7DNhBVc 00:21:23.797 [2024-07-15 12:51:15.677071] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:23.797 [2024-07-15 12:51:15.677106] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:23.797 [2024-07-15 12:51:15.677153] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:23.797 
request: 00:21:23.797 { 00:21:23.797 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.797 "host": "nqn.2016-06.io.spdk:host1", 00:21:23.797 "psk": "/tmp/tmp.ZAw7DNhBVc", 00:21:23.797 "method": "nvmf_subsystem_add_host", 00:21:23.797 "req_id": 1 00:21:23.797 } 00:21:23.797 Got JSON-RPC error response 00:21:23.797 response: 00:21:23.797 { 00:21:23.797 "code": -32603, 00:21:23.797 "message": "Internal error" 00:21:23.797 } 00:21:23.797 12:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:23.797 12:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:23.797 12:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:23.797 12:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:23.797 12:51:15 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3979137 00:21:23.797 12:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3979137 ']' 00:21:23.797 12:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3979137 00:21:23.797 12:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:23.797 12:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:23.797 12:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3979137 00:21:24.056 12:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:24.056 12:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:24.056 12:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3979137' 00:21:24.056 killing process with pid 3979137 00:21:24.056 12:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3979137 00:21:24.056 12:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3979137 00:21:24.315 12:51:16 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.ZAw7DNhBVc 
00:21:24.315 12:51:16 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:24.315 12:51:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:24.315 12:51:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:24.315 12:51:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.315 12:51:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3979680 00:21:24.315 12:51:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3979680 00:21:24.315 12:51:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:24.315 12:51:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3979680 ']' 00:21:24.315 12:51:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.315 12:51:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:24.315 12:51:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.315 12:51:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:24.315 12:51:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.315 [2024-07-15 12:51:16.132514] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:21:24.315 [2024-07-15 12:51:16.132583] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.315 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.315 [2024-07-15 12:51:16.220559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.572 [2024-07-15 12:51:16.321069] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.572 [2024-07-15 12:51:16.321120] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.572 [2024-07-15 12:51:16.321133] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.572 [2024-07-15 12:51:16.321144] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.572 [2024-07-15 12:51:16.321153] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:24.572 [2024-07-15 12:51:16.321195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.137 12:51:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:25.137 12:51:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:25.137 12:51:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:25.137 12:51:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:25.137 12:51:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.137 12:51:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.137 12:51:17 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.ZAw7DNhBVc 00:21:25.137 12:51:17 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ZAw7DNhBVc 00:21:25.137 12:51:17 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:25.395 [2024-07-15 12:51:17.269473] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.395 12:51:17 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:25.653 12:51:17 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:25.911 [2024-07-15 12:51:17.710643] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:25.911 [2024-07-15 12:51:17.710880] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.911 12:51:17 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 
4096 -b malloc0 00:21:26.169 malloc0 00:21:26.169 12:51:18 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:26.442 12:51:18 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZAw7DNhBVc 00:21:26.773 [2024-07-15 12:51:18.499305] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:26.773 12:51:18 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3980097 00:21:26.773 12:51:18 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:26.773 12:51:18 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:26.773 12:51:18 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3980097 /var/tmp/bdevperf.sock 00:21:26.773 12:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3980097 ']' 00:21:26.773 12:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.773 12:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:26.773 12:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:26.773 12:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:26.773 12:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.773 [2024-07-15 12:51:18.583938] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:21:26.773 [2024-07-15 12:51:18.584002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3980097 ] 00:21:26.773 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.075 [2024-07-15 12:51:18.700094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.075 [2024-07-15 12:51:18.848392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.643 12:51:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.643 12:51:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:27.643 12:51:19 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZAw7DNhBVc 00:21:27.901 [2024-07-15 12:51:19.676376] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:27.901 [2024-07-15 12:51:19.676541] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:27.901 TLSTESTn1 00:21:27.901 12:51:19 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:28.469 12:51:20 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:28.469 "subsystems": [ 00:21:28.469 { 00:21:28.469 
"subsystem": "keyring", 00:21:28.469 "config": [] 00:21:28.469 }, 00:21:28.469 { 00:21:28.469 "subsystem": "iobuf", 00:21:28.469 "config": [ 00:21:28.469 { 00:21:28.469 "method": "iobuf_set_options", 00:21:28.469 "params": { 00:21:28.469 "small_pool_count": 8192, 00:21:28.469 "large_pool_count": 1024, 00:21:28.469 "small_bufsize": 8192, 00:21:28.469 "large_bufsize": 135168 00:21:28.469 } 00:21:28.469 } 00:21:28.469 ] 00:21:28.469 }, 00:21:28.469 { 00:21:28.469 "subsystem": "sock", 00:21:28.469 "config": [ 00:21:28.469 { 00:21:28.469 "method": "sock_set_default_impl", 00:21:28.469 "params": { 00:21:28.469 "impl_name": "posix" 00:21:28.469 } 00:21:28.469 }, 00:21:28.469 { 00:21:28.469 "method": "sock_impl_set_options", 00:21:28.469 "params": { 00:21:28.469 "impl_name": "ssl", 00:21:28.469 "recv_buf_size": 4096, 00:21:28.469 "send_buf_size": 4096, 00:21:28.469 "enable_recv_pipe": true, 00:21:28.469 "enable_quickack": false, 00:21:28.469 "enable_placement_id": 0, 00:21:28.469 "enable_zerocopy_send_server": true, 00:21:28.469 "enable_zerocopy_send_client": false, 00:21:28.469 "zerocopy_threshold": 0, 00:21:28.469 "tls_version": 0, 00:21:28.469 "enable_ktls": false 00:21:28.469 } 00:21:28.469 }, 00:21:28.469 { 00:21:28.469 "method": "sock_impl_set_options", 00:21:28.469 "params": { 00:21:28.469 "impl_name": "posix", 00:21:28.469 "recv_buf_size": 2097152, 00:21:28.469 "send_buf_size": 2097152, 00:21:28.469 "enable_recv_pipe": true, 00:21:28.469 "enable_quickack": false, 00:21:28.469 "enable_placement_id": 0, 00:21:28.469 "enable_zerocopy_send_server": true, 00:21:28.469 "enable_zerocopy_send_client": false, 00:21:28.469 "zerocopy_threshold": 0, 00:21:28.469 "tls_version": 0, 00:21:28.469 "enable_ktls": false 00:21:28.469 } 00:21:28.469 } 00:21:28.469 ] 00:21:28.469 }, 00:21:28.469 { 00:21:28.469 "subsystem": "vmd", 00:21:28.469 "config": [] 00:21:28.469 }, 00:21:28.469 { 00:21:28.469 "subsystem": "accel", 00:21:28.469 "config": [ 00:21:28.469 { 00:21:28.469 "method": 
"accel_set_options", 00:21:28.469 "params": { 00:21:28.469 "small_cache_size": 128, 00:21:28.469 "large_cache_size": 16, 00:21:28.469 "task_count": 2048, 00:21:28.469 "sequence_count": 2048, 00:21:28.469 "buf_count": 2048 00:21:28.469 } 00:21:28.469 } 00:21:28.469 ] 00:21:28.469 }, 00:21:28.469 { 00:21:28.469 "subsystem": "bdev", 00:21:28.469 "config": [ 00:21:28.469 { 00:21:28.469 "method": "bdev_set_options", 00:21:28.469 "params": { 00:21:28.469 "bdev_io_pool_size": 65535, 00:21:28.469 "bdev_io_cache_size": 256, 00:21:28.469 "bdev_auto_examine": true, 00:21:28.469 "iobuf_small_cache_size": 128, 00:21:28.469 "iobuf_large_cache_size": 16 00:21:28.469 } 00:21:28.469 }, 00:21:28.469 { 00:21:28.469 "method": "bdev_raid_set_options", 00:21:28.469 "params": { 00:21:28.469 "process_window_size_kb": 1024 00:21:28.469 } 00:21:28.469 }, 00:21:28.469 { 00:21:28.469 "method": "bdev_iscsi_set_options", 00:21:28.469 "params": { 00:21:28.469 "timeout_sec": 30 00:21:28.469 } 00:21:28.469 }, 00:21:28.469 { 00:21:28.469 "method": "bdev_nvme_set_options", 00:21:28.469 "params": { 00:21:28.469 "action_on_timeout": "none", 00:21:28.469 "timeout_us": 0, 00:21:28.469 "timeout_admin_us": 0, 00:21:28.469 "keep_alive_timeout_ms": 10000, 00:21:28.469 "arbitration_burst": 0, 00:21:28.469 "low_priority_weight": 0, 00:21:28.469 "medium_priority_weight": 0, 00:21:28.469 "high_priority_weight": 0, 00:21:28.469 "nvme_adminq_poll_period_us": 10000, 00:21:28.469 "nvme_ioq_poll_period_us": 0, 00:21:28.469 "io_queue_requests": 0, 00:21:28.469 "delay_cmd_submit": true, 00:21:28.469 "transport_retry_count": 4, 00:21:28.469 "bdev_retry_count": 3, 00:21:28.469 "transport_ack_timeout": 0, 00:21:28.469 "ctrlr_loss_timeout_sec": 0, 00:21:28.469 "reconnect_delay_sec": 0, 00:21:28.469 "fast_io_fail_timeout_sec": 0, 00:21:28.469 "disable_auto_failback": false, 00:21:28.469 "generate_uuids": false, 00:21:28.469 "transport_tos": 0, 00:21:28.469 "nvme_error_stat": false, 00:21:28.469 "rdma_srq_size": 0, 
00:21:28.469 "io_path_stat": false, 00:21:28.469 "allow_accel_sequence": false, 00:21:28.470 "rdma_max_cq_size": 0, 00:21:28.470 "rdma_cm_event_timeout_ms": 0, 00:21:28.470 "dhchap_digests": [ 00:21:28.470 "sha256", 00:21:28.470 "sha384", 00:21:28.470 "sha512" 00:21:28.470 ], 00:21:28.470 "dhchap_dhgroups": [ 00:21:28.470 "null", 00:21:28.470 "ffdhe2048", 00:21:28.470 "ffdhe3072", 00:21:28.470 "ffdhe4096", 00:21:28.470 "ffdhe6144", 00:21:28.470 "ffdhe8192" 00:21:28.470 ] 00:21:28.470 } 00:21:28.470 }, 00:21:28.470 { 00:21:28.470 "method": "bdev_nvme_set_hotplug", 00:21:28.470 "params": { 00:21:28.470 "period_us": 100000, 00:21:28.470 "enable": false 00:21:28.470 } 00:21:28.470 }, 00:21:28.470 { 00:21:28.470 "method": "bdev_malloc_create", 00:21:28.470 "params": { 00:21:28.470 "name": "malloc0", 00:21:28.470 "num_blocks": 8192, 00:21:28.470 "block_size": 4096, 00:21:28.470 "physical_block_size": 4096, 00:21:28.470 "uuid": "c6d20d74-42fe-4558-ad84-f794e2a68a18", 00:21:28.470 "optimal_io_boundary": 0 00:21:28.470 } 00:21:28.470 }, 00:21:28.470 { 00:21:28.470 "method": "bdev_wait_for_examine" 00:21:28.470 } 00:21:28.470 ] 00:21:28.470 }, 00:21:28.470 { 00:21:28.470 "subsystem": "nbd", 00:21:28.470 "config": [] 00:21:28.470 }, 00:21:28.470 { 00:21:28.470 "subsystem": "scheduler", 00:21:28.470 "config": [ 00:21:28.470 { 00:21:28.470 "method": "framework_set_scheduler", 00:21:28.470 "params": { 00:21:28.470 "name": "static" 00:21:28.470 } 00:21:28.470 } 00:21:28.470 ] 00:21:28.470 }, 00:21:28.470 { 00:21:28.470 "subsystem": "nvmf", 00:21:28.470 "config": [ 00:21:28.470 { 00:21:28.470 "method": "nvmf_set_config", 00:21:28.470 "params": { 00:21:28.470 "discovery_filter": "match_any", 00:21:28.470 "admin_cmd_passthru": { 00:21:28.470 "identify_ctrlr": false 00:21:28.470 } 00:21:28.470 } 00:21:28.470 }, 00:21:28.470 { 00:21:28.470 "method": "nvmf_set_max_subsystems", 00:21:28.470 "params": { 00:21:28.470 "max_subsystems": 1024 00:21:28.470 } 00:21:28.470 }, 00:21:28.470 { 
00:21:28.470 "method": "nvmf_set_crdt", 00:21:28.470 "params": { 00:21:28.470 "crdt1": 0, 00:21:28.470 "crdt2": 0, 00:21:28.470 "crdt3": 0 00:21:28.470 } 00:21:28.470 }, 00:21:28.470 { 00:21:28.470 "method": "nvmf_create_transport", 00:21:28.470 "params": { 00:21:28.470 "trtype": "TCP", 00:21:28.470 "max_queue_depth": 128, 00:21:28.470 "max_io_qpairs_per_ctrlr": 127, 00:21:28.470 "in_capsule_data_size": 4096, 00:21:28.470 "max_io_size": 131072, 00:21:28.470 "io_unit_size": 131072, 00:21:28.470 "max_aq_depth": 128, 00:21:28.470 "num_shared_buffers": 511, 00:21:28.470 "buf_cache_size": 4294967295, 00:21:28.470 "dif_insert_or_strip": false, 00:21:28.470 "zcopy": false, 00:21:28.470 "c2h_success": false, 00:21:28.470 "sock_priority": 0, 00:21:28.470 "abort_timeout_sec": 1, 00:21:28.470 "ack_timeout": 0, 00:21:28.470 "data_wr_pool_size": 0 00:21:28.470 } 00:21:28.470 }, 00:21:28.470 { 00:21:28.470 "method": "nvmf_create_subsystem", 00:21:28.470 "params": { 00:21:28.470 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.470 "allow_any_host": false, 00:21:28.470 "serial_number": "SPDK00000000000001", 00:21:28.470 "model_number": "SPDK bdev Controller", 00:21:28.470 "max_namespaces": 10, 00:21:28.470 "min_cntlid": 1, 00:21:28.470 "max_cntlid": 65519, 00:21:28.470 "ana_reporting": false 00:21:28.470 } 00:21:28.470 }, 00:21:28.470 { 00:21:28.470 "method": "nvmf_subsystem_add_host", 00:21:28.470 "params": { 00:21:28.470 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.470 "host": "nqn.2016-06.io.spdk:host1", 00:21:28.470 "psk": "/tmp/tmp.ZAw7DNhBVc" 00:21:28.470 } 00:21:28.470 }, 00:21:28.470 { 00:21:28.470 "method": "nvmf_subsystem_add_ns", 00:21:28.470 "params": { 00:21:28.470 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.470 "namespace": { 00:21:28.470 "nsid": 1, 00:21:28.470 "bdev_name": "malloc0", 00:21:28.470 "nguid": "C6D20D7442FE4558AD84F794E2A68A18", 00:21:28.470 "uuid": "c6d20d74-42fe-4558-ad84-f794e2a68a18", 00:21:28.470 "no_auto_visible": false 00:21:28.470 } 00:21:28.470 
} 00:21:28.470 }, 00:21:28.470 { 00:21:28.470 "method": "nvmf_subsystem_add_listener", 00:21:28.470 "params": { 00:21:28.470 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.470 "listen_address": { 00:21:28.470 "trtype": "TCP", 00:21:28.470 "adrfam": "IPv4", 00:21:28.470 "traddr": "10.0.0.2", 00:21:28.470 "trsvcid": "4420" 00:21:28.470 }, 00:21:28.470 "secure_channel": true 00:21:28.470 } 00:21:28.470 } 00:21:28.470 ] 00:21:28.470 } 00:21:28.470 ] 00:21:28.470 }' 00:21:28.470 12:51:20 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:28.730 12:51:20 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:28.730 "subsystems": [ 00:21:28.730 { 00:21:28.730 "subsystem": "keyring", 00:21:28.730 "config": [] 00:21:28.730 }, 00:21:28.730 { 00:21:28.730 "subsystem": "iobuf", 00:21:28.730 "config": [ 00:21:28.730 { 00:21:28.730 "method": "iobuf_set_options", 00:21:28.730 "params": { 00:21:28.730 "small_pool_count": 8192, 00:21:28.730 "large_pool_count": 1024, 00:21:28.730 "small_bufsize": 8192, 00:21:28.730 "large_bufsize": 135168 00:21:28.730 } 00:21:28.730 } 00:21:28.730 ] 00:21:28.730 }, 00:21:28.730 { 00:21:28.730 "subsystem": "sock", 00:21:28.730 "config": [ 00:21:28.730 { 00:21:28.730 "method": "sock_set_default_impl", 00:21:28.730 "params": { 00:21:28.730 "impl_name": "posix" 00:21:28.730 } 00:21:28.730 }, 00:21:28.730 { 00:21:28.730 "method": "sock_impl_set_options", 00:21:28.730 "params": { 00:21:28.730 "impl_name": "ssl", 00:21:28.730 "recv_buf_size": 4096, 00:21:28.730 "send_buf_size": 4096, 00:21:28.730 "enable_recv_pipe": true, 00:21:28.730 "enable_quickack": false, 00:21:28.730 "enable_placement_id": 0, 00:21:28.730 "enable_zerocopy_send_server": true, 00:21:28.730 "enable_zerocopy_send_client": false, 00:21:28.730 "zerocopy_threshold": 0, 00:21:28.730 "tls_version": 0, 00:21:28.730 "enable_ktls": false 00:21:28.730 } 00:21:28.730 }, 00:21:28.730 { 
00:21:28.730 "method": "sock_impl_set_options", 00:21:28.730 "params": { 00:21:28.730 "impl_name": "posix", 00:21:28.730 "recv_buf_size": 2097152, 00:21:28.730 "send_buf_size": 2097152, 00:21:28.730 "enable_recv_pipe": true, 00:21:28.730 "enable_quickack": false, 00:21:28.730 "enable_placement_id": 0, 00:21:28.730 "enable_zerocopy_send_server": true, 00:21:28.730 "enable_zerocopy_send_client": false, 00:21:28.730 "zerocopy_threshold": 0, 00:21:28.730 "tls_version": 0, 00:21:28.730 "enable_ktls": false 00:21:28.730 } 00:21:28.730 } 00:21:28.730 ] 00:21:28.730 }, 00:21:28.730 { 00:21:28.730 "subsystem": "vmd", 00:21:28.730 "config": [] 00:21:28.730 }, 00:21:28.730 { 00:21:28.730 "subsystem": "accel", 00:21:28.730 "config": [ 00:21:28.730 { 00:21:28.730 "method": "accel_set_options", 00:21:28.730 "params": { 00:21:28.730 "small_cache_size": 128, 00:21:28.730 "large_cache_size": 16, 00:21:28.730 "task_count": 2048, 00:21:28.730 "sequence_count": 2048, 00:21:28.730 "buf_count": 2048 00:21:28.730 } 00:21:28.730 } 00:21:28.730 ] 00:21:28.730 }, 00:21:28.730 { 00:21:28.730 "subsystem": "bdev", 00:21:28.730 "config": [ 00:21:28.731 { 00:21:28.731 "method": "bdev_set_options", 00:21:28.731 "params": { 00:21:28.731 "bdev_io_pool_size": 65535, 00:21:28.731 "bdev_io_cache_size": 256, 00:21:28.731 "bdev_auto_examine": true, 00:21:28.731 "iobuf_small_cache_size": 128, 00:21:28.731 "iobuf_large_cache_size": 16 00:21:28.731 } 00:21:28.731 }, 00:21:28.731 { 00:21:28.731 "method": "bdev_raid_set_options", 00:21:28.731 "params": { 00:21:28.731 "process_window_size_kb": 1024 00:21:28.731 } 00:21:28.731 }, 00:21:28.731 { 00:21:28.731 "method": "bdev_iscsi_set_options", 00:21:28.731 "params": { 00:21:28.731 "timeout_sec": 30 00:21:28.731 } 00:21:28.731 }, 00:21:28.731 { 00:21:28.731 "method": "bdev_nvme_set_options", 00:21:28.731 "params": { 00:21:28.731 "action_on_timeout": "none", 00:21:28.731 "timeout_us": 0, 00:21:28.731 "timeout_admin_us": 0, 00:21:28.731 "keep_alive_timeout_ms": 
10000, 00:21:28.731 "arbitration_burst": 0, 00:21:28.731 "low_priority_weight": 0, 00:21:28.731 "medium_priority_weight": 0, 00:21:28.731 "high_priority_weight": 0, 00:21:28.731 "nvme_adminq_poll_period_us": 10000, 00:21:28.731 "nvme_ioq_poll_period_us": 0, 00:21:28.731 "io_queue_requests": 512, 00:21:28.731 "delay_cmd_submit": true, 00:21:28.731 "transport_retry_count": 4, 00:21:28.731 "bdev_retry_count": 3, 00:21:28.731 "transport_ack_timeout": 0, 00:21:28.731 "ctrlr_loss_timeout_sec": 0, 00:21:28.731 "reconnect_delay_sec": 0, 00:21:28.731 "fast_io_fail_timeout_sec": 0, 00:21:28.731 "disable_auto_failback": false, 00:21:28.731 "generate_uuids": false, 00:21:28.731 "transport_tos": 0, 00:21:28.731 "nvme_error_stat": false, 00:21:28.731 "rdma_srq_size": 0, 00:21:28.731 "io_path_stat": false, 00:21:28.731 "allow_accel_sequence": false, 00:21:28.731 "rdma_max_cq_size": 0, 00:21:28.731 "rdma_cm_event_timeout_ms": 0, 00:21:28.731 "dhchap_digests": [ 00:21:28.731 "sha256", 00:21:28.731 "sha384", 00:21:28.731 "sha512" 00:21:28.731 ], 00:21:28.731 "dhchap_dhgroups": [ 00:21:28.731 "null", 00:21:28.731 "ffdhe2048", 00:21:28.731 "ffdhe3072", 00:21:28.731 "ffdhe4096", 00:21:28.731 "ffdhe6144", 00:21:28.731 "ffdhe8192" 00:21:28.731 ] 00:21:28.731 } 00:21:28.731 }, 00:21:28.731 { 00:21:28.731 "method": "bdev_nvme_attach_controller", 00:21:28.731 "params": { 00:21:28.731 "name": "TLSTEST", 00:21:28.731 "trtype": "TCP", 00:21:28.731 "adrfam": "IPv4", 00:21:28.731 "traddr": "10.0.0.2", 00:21:28.731 "trsvcid": "4420", 00:21:28.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.731 "prchk_reftag": false, 00:21:28.731 "prchk_guard": false, 00:21:28.731 "ctrlr_loss_timeout_sec": 0, 00:21:28.731 "reconnect_delay_sec": 0, 00:21:28.731 "fast_io_fail_timeout_sec": 0, 00:21:28.731 "psk": "/tmp/tmp.ZAw7DNhBVc", 00:21:28.731 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:28.731 "hdgst": false, 00:21:28.731 "ddgst": false 00:21:28.731 } 00:21:28.731 }, 00:21:28.731 { 00:21:28.731 "method": 
"bdev_nvme_set_hotplug", 00:21:28.731 "params": { 00:21:28.731 "period_us": 100000, 00:21:28.731 "enable": false 00:21:28.731 } 00:21:28.731 }, 00:21:28.731 { 00:21:28.731 "method": "bdev_wait_for_examine" 00:21:28.731 } 00:21:28.731 ] 00:21:28.731 }, 00:21:28.731 { 00:21:28.731 "subsystem": "nbd", 00:21:28.731 "config": [] 00:21:28.731 } 00:21:28.731 ] 00:21:28.731 }' 00:21:28.731 12:51:20 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3980097 00:21:28.731 12:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3980097 ']' 00:21:28.731 12:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3980097 00:21:28.731 12:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:28.731 12:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:28.731 12:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3980097 00:21:28.731 12:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:28.731 12:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:28.731 12:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3980097' 00:21:28.731 killing process with pid 3980097 00:21:28.731 12:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3980097 00:21:28.731 Received shutdown signal, test time was about 10.000000 seconds 00:21:28.731 00:21:28.731 Latency(us) 00:21:28.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.731 =================================================================================================================== 00:21:28.731 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:28.731 [2024-07-15 12:51:20.504888] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 
00:21:28.731 12:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3980097 00:21:28.991 12:51:20 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3979680 00:21:28.991 12:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3979680 ']' 00:21:28.991 12:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3979680 00:21:28.991 12:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:28.991 12:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:28.991 12:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3979680 00:21:28.991 12:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:28.991 12:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:28.991 12:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3979680' 00:21:28.991 killing process with pid 3979680 00:21:28.991 12:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3979680 00:21:28.991 [2024-07-15 12:51:20.911796] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:28.991 12:51:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3979680 00:21:29.250 12:51:21 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:29.250 12:51:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:29.250 12:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:29.250 12:51:21 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:29.250 "subsystems": [ 00:21:29.250 { 00:21:29.250 "subsystem": "keyring", 00:21:29.250 "config": [] 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "subsystem": "iobuf", 00:21:29.250 "config": [ 00:21:29.250 { 00:21:29.250 "method": 
"iobuf_set_options", 00:21:29.250 "params": { 00:21:29.250 "small_pool_count": 8192, 00:21:29.250 "large_pool_count": 1024, 00:21:29.250 "small_bufsize": 8192, 00:21:29.250 "large_bufsize": 135168 00:21:29.250 } 00:21:29.250 } 00:21:29.250 ] 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "subsystem": "sock", 00:21:29.250 "config": [ 00:21:29.250 { 00:21:29.250 "method": "sock_set_default_impl", 00:21:29.250 "params": { 00:21:29.250 "impl_name": "posix" 00:21:29.250 } 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "method": "sock_impl_set_options", 00:21:29.250 "params": { 00:21:29.250 "impl_name": "ssl", 00:21:29.250 "recv_buf_size": 4096, 00:21:29.250 "send_buf_size": 4096, 00:21:29.250 "enable_recv_pipe": true, 00:21:29.250 "enable_quickack": false, 00:21:29.250 "enable_placement_id": 0, 00:21:29.250 "enable_zerocopy_send_server": true, 00:21:29.250 "enable_zerocopy_send_client": false, 00:21:29.250 "zerocopy_threshold": 0, 00:21:29.250 "tls_version": 0, 00:21:29.250 "enable_ktls": false 00:21:29.250 } 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "method": "sock_impl_set_options", 00:21:29.250 "params": { 00:21:29.250 "impl_name": "posix", 00:21:29.250 "recv_buf_size": 2097152, 00:21:29.250 "send_buf_size": 2097152, 00:21:29.250 "enable_recv_pipe": true, 00:21:29.250 "enable_quickack": false, 00:21:29.250 "enable_placement_id": 0, 00:21:29.250 "enable_zerocopy_send_server": true, 00:21:29.250 "enable_zerocopy_send_client": false, 00:21:29.250 "zerocopy_threshold": 0, 00:21:29.250 "tls_version": 0, 00:21:29.250 "enable_ktls": false 00:21:29.250 } 00:21:29.250 } 00:21:29.250 ] 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "subsystem": "vmd", 00:21:29.250 "config": [] 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "subsystem": "accel", 00:21:29.250 "config": [ 00:21:29.250 { 00:21:29.250 "method": "accel_set_options", 00:21:29.250 "params": { 00:21:29.250 "small_cache_size": 128, 00:21:29.250 "large_cache_size": 16, 00:21:29.250 "task_count": 2048, 00:21:29.250 
"sequence_count": 2048, 00:21:29.250 "buf_count": 2048 00:21:29.250 } 00:21:29.250 } 00:21:29.250 ] 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "subsystem": "bdev", 00:21:29.250 "config": [ 00:21:29.250 { 00:21:29.250 "method": "bdev_set_options", 00:21:29.250 "params": { 00:21:29.250 "bdev_io_pool_size": 65535, 00:21:29.250 "bdev_io_cache_size": 256, 00:21:29.250 "bdev_auto_examine": true, 00:21:29.250 "iobuf_small_cache_size": 128, 00:21:29.250 "iobuf_large_cache_size": 16 00:21:29.250 } 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "method": "bdev_raid_set_options", 00:21:29.250 "params": { 00:21:29.250 "process_window_size_kb": 1024 00:21:29.250 } 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "method": "bdev_iscsi_set_options", 00:21:29.250 "params": { 00:21:29.250 "timeout_sec": 30 00:21:29.250 } 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "method": "bdev_nvme_set_options", 00:21:29.250 "params": { 00:21:29.250 "action_on_timeout": "none", 00:21:29.250 "timeout_us": 0, 00:21:29.250 "timeout_admin_us": 0, 00:21:29.250 "keep_alive_timeout_ms": 10000, 00:21:29.250 "arbitration_burst": 0, 00:21:29.250 "low_priority_weight": 0, 00:21:29.250 "medium_priority_weight": 0, 00:21:29.250 "high_priority_weight": 0, 00:21:29.250 "nvme_adminq_poll_period_us": 10000, 00:21:29.250 "nvme_ioq_poll_period_us": 0, 00:21:29.250 "io_queue_requests": 0, 00:21:29.250 "delay_cmd_submit": true, 00:21:29.250 "transport_retry_count": 4, 00:21:29.250 "bdev_retry_count": 3, 00:21:29.250 "transport_ack_timeout": 0, 00:21:29.250 "ctrlr_loss_timeout_sec": 0, 00:21:29.250 "reconnect_delay_sec": 0, 00:21:29.250 "fast_io_fail_timeout_sec": 0, 00:21:29.250 "disable_auto_failback": false, 00:21:29.250 "generate_uuids": false, 00:21:29.250 "transport_tos": 0, 00:21:29.250 "nvme_error_stat": false, 00:21:29.250 "rdma_srq_size": 0, 00:21:29.250 "io_path_stat": false, 00:21:29.250 "allow_accel_sequence": false, 00:21:29.250 "rdma_max_cq_size": 0, 00:21:29.250 "rdma_cm_event_timeout_ms": 0, 00:21:29.250 
"dhchap_digests": [ 00:21:29.250 "sha256", 00:21:29.250 "sha384", 00:21:29.250 "sha512" 00:21:29.250 ], 00:21:29.250 "dhchap_dhgroups": [ 00:21:29.250 "null", 00:21:29.250 "ffdhe2048", 00:21:29.250 "ffdhe3072", 00:21:29.250 "ffdhe4096", 00:21:29.250 "ffdhe6144", 00:21:29.250 "ffdhe8192" 00:21:29.250 ] 00:21:29.250 } 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "method": "bdev_nvme_set_hotplug", 00:21:29.250 "params": { 00:21:29.250 "period_us": 100000, 00:21:29.250 "enable": false 00:21:29.250 } 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "method": "bdev_malloc_create", 00:21:29.250 "params": { 00:21:29.250 "name": "malloc0", 00:21:29.250 "num_blocks": 8192, 00:21:29.250 "block_size": 4096, 00:21:29.250 "physical_block_size": 4096, 00:21:29.250 "uuid": "c6d20d74-42fe-4558-ad84-f794e2a68a18", 00:21:29.250 "optimal_io_boundary": 0 00:21:29.250 } 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "method": "bdev_wait_for_examine" 00:21:29.250 } 00:21:29.250 ] 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "subsystem": "nbd", 00:21:29.250 "config": [] 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "subsystem": "scheduler", 00:21:29.250 "config": [ 00:21:29.250 { 00:21:29.250 "method": "framework_set_scheduler", 00:21:29.250 "params": { 00:21:29.250 "name": "static" 00:21:29.250 } 00:21:29.250 } 00:21:29.250 ] 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "subsystem": "nvmf", 00:21:29.250 "config": [ 00:21:29.250 { 00:21:29.250 "method": "nvmf_set_config", 00:21:29.250 "params": { 00:21:29.250 "discovery_filter": "match_any", 00:21:29.250 "admin_cmd_passthru": { 00:21:29.250 "identify_ctrlr": false 00:21:29.250 } 00:21:29.250 } 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "method": "nvmf_set_max_subsystems", 00:21:29.250 "params": { 00:21:29.250 "max_subsystems": 1024 00:21:29.250 } 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "method": "nvmf_set_crdt", 00:21:29.250 "params": { 00:21:29.250 "crdt1": 0, 00:21:29.250 "crdt2": 0, 00:21:29.250 "crdt3": 0 00:21:29.250 } 00:21:29.250 }, 
00:21:29.250 { 00:21:29.250 "method": "nvmf_create_transport", 00:21:29.250 "params": { 00:21:29.250 "trtype": "TCP", 00:21:29.250 "max_queue_depth": 128, 00:21:29.250 "max_io_qpairs_per_ctrlr": 127, 00:21:29.250 "in_capsule_data_size": 4096, 00:21:29.250 "max_io_size": 131072, 00:21:29.250 "io_unit_size": 131072, 00:21:29.250 "max_aq_depth": 128, 00:21:29.250 "num_shared_buffers": 511, 00:21:29.250 "buf_cache_size": 4294967295, 00:21:29.250 "dif_insert_or_strip": false, 00:21:29.250 "zcopy": false, 00:21:29.250 "c2h_success": false, 00:21:29.250 "sock_priority": 0, 00:21:29.250 "abort_timeout_sec": 1, 00:21:29.250 "ack_timeout": 0, 00:21:29.250 "data_wr_pool_size": 0 00:21:29.250 } 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "method": "nvmf_create_subsystem", 00:21:29.250 "params": { 00:21:29.250 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.250 "allow_any_host": false, 00:21:29.250 "serial_number": "SPDK00000000000001", 00:21:29.250 "model_number": "SPDK bdev Controller", 00:21:29.250 "max_namespaces": 10, 00:21:29.250 "min_cntlid": 1, 00:21:29.250 "max_cntlid": 65519, 00:21:29.250 "ana_reporting": false 00:21:29.250 } 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "method": "nvmf_subsystem_add_host", 00:21:29.250 "params": { 00:21:29.250 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.250 "host": "nqn.2016-06.io.spdk:host1", 00:21:29.250 "psk": "/tmp/tmp.ZAw7DNhBVc" 00:21:29.250 } 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "method": "nvmf_subsystem_add_ns", 00:21:29.250 "params": { 00:21:29.250 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.250 "namespace": { 00:21:29.250 "nsid": 1, 00:21:29.250 "bdev_name": "malloc0", 00:21:29.250 "nguid": "C6D20D7442FE4558AD84F794E2A68A18", 00:21:29.250 "uuid": "c6d20d74-42fe-4558-ad84-f794e2a68a18", 00:21:29.250 "no_auto_visible": false 00:21:29.250 } 00:21:29.250 } 00:21:29.250 }, 00:21:29.250 { 00:21:29.250 "method": "nvmf_subsystem_add_listener", 00:21:29.250 "params": { 00:21:29.250 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:21:29.250 "listen_address": { 00:21:29.250 "trtype": "TCP", 00:21:29.250 "adrfam": "IPv4", 00:21:29.250 "traddr": "10.0.0.2", 00:21:29.250 "trsvcid": "4420" 00:21:29.250 }, 00:21:29.250 "secure_channel": true 00:21:29.250 } 00:21:29.250 } 00:21:29.250 ] 00:21:29.250 } 00:21:29.250 ] 00:21:29.250 }' 00:21:29.250 12:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.250 12:51:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3980637 00:21:29.250 12:51:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:29.250 12:51:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3980637 00:21:29.250 12:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3980637 ']' 00:21:29.250 12:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.250 12:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:29.250 12:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.250 12:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.250 12:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.509 [2024-07-15 12:51:21.240402] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:21:29.509 [2024-07-15 12:51:21.240459] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.509 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.509 [2024-07-15 12:51:21.324618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.509 [2024-07-15 12:51:21.426903] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.509 [2024-07-15 12:51:21.426952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.509 [2024-07-15 12:51:21.426965] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.509 [2024-07-15 12:51:21.426976] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.509 [2024-07-15 12:51:21.426986] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:29.509 [2024-07-15 12:51:21.427057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.768 [2024-07-15 12:51:21.645825] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.768 [2024-07-15 12:51:21.661735] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:29.768 [2024-07-15 12:51:21.677804] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:29.768 [2024-07-15 12:51:21.687538] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.335 12:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.335 12:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:30.335 12:51:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:30.335 12:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:30.335 12:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.335 12:51:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.335 12:51:22 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3980869 00:21:30.335 12:51:22 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3980869 /var/tmp/bdevperf.sock 00:21:30.335 12:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3980869 ']' 00:21:30.335 12:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.335 12:51:22 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:30.335 12:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.335 12:51:22 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:30.335 12:51:22 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:30.335 "subsystems": [ 00:21:30.335 { 00:21:30.335 "subsystem": "keyring", 00:21:30.335 "config": [] 00:21:30.335 }, 00:21:30.335 { 00:21:30.335 "subsystem": "iobuf", 00:21:30.335 "config": [ 00:21:30.335 { 00:21:30.335 "method": "iobuf_set_options", 00:21:30.335 "params": { 00:21:30.335 "small_pool_count": 8192, 00:21:30.335 "large_pool_count": 1024, 00:21:30.335 "small_bufsize": 8192, 00:21:30.335 "large_bufsize": 135168 00:21:30.335 } 00:21:30.335 } 00:21:30.335 ] 00:21:30.335 }, 00:21:30.335 { 00:21:30.335 "subsystem": "sock", 00:21:30.335 "config": [ 00:21:30.335 { 00:21:30.335 "method": "sock_set_default_impl", 00:21:30.335 "params": { 00:21:30.335 "impl_name": "posix" 00:21:30.335 } 00:21:30.335 }, 00:21:30.335 { 00:21:30.335 "method": "sock_impl_set_options", 00:21:30.335 "params": { 00:21:30.335 "impl_name": "ssl", 00:21:30.335 "recv_buf_size": 4096, 00:21:30.335 "send_buf_size": 4096, 00:21:30.335 "enable_recv_pipe": true, 00:21:30.335 "enable_quickack": false, 00:21:30.335 "enable_placement_id": 0, 00:21:30.335 "enable_zerocopy_send_server": true, 00:21:30.335 "enable_zerocopy_send_client": false, 00:21:30.335 "zerocopy_threshold": 0, 00:21:30.335 "tls_version": 0, 00:21:30.335 "enable_ktls": false 00:21:30.335 } 00:21:30.335 }, 00:21:30.335 { 00:21:30.335 "method": "sock_impl_set_options", 00:21:30.335 "params": { 00:21:30.335 "impl_name": "posix", 00:21:30.335 "recv_buf_size": 2097152, 00:21:30.335 "send_buf_size": 2097152, 00:21:30.335 "enable_recv_pipe": true, 00:21:30.335 "enable_quickack": false, 00:21:30.335 "enable_placement_id": 0, 00:21:30.335 "enable_zerocopy_send_server": true, 00:21:30.335 "enable_zerocopy_send_client": false, 
00:21:30.335 "zerocopy_threshold": 0, 00:21:30.335 "tls_version": 0, 00:21:30.335 "enable_ktls": false 00:21:30.335 } 00:21:30.335 } 00:21:30.335 ] 00:21:30.336 }, 00:21:30.336 { 00:21:30.336 "subsystem": "vmd", 00:21:30.336 "config": [] 00:21:30.336 }, 00:21:30.336 { 00:21:30.336 "subsystem": "accel", 00:21:30.336 "config": [ 00:21:30.336 { 00:21:30.336 "method": "accel_set_options", 00:21:30.336 "params": { 00:21:30.336 "small_cache_size": 128, 00:21:30.336 "large_cache_size": 16, 00:21:30.336 "task_count": 2048, 00:21:30.336 "sequence_count": 2048, 00:21:30.336 "buf_count": 2048 00:21:30.336 } 00:21:30.336 } 00:21:30.336 ] 00:21:30.336 }, 00:21:30.336 { 00:21:30.336 "subsystem": "bdev", 00:21:30.336 "config": [ 00:21:30.336 { 00:21:30.336 "method": "bdev_set_options", 00:21:30.336 "params": { 00:21:30.336 "bdev_io_pool_size": 65535, 00:21:30.336 "bdev_io_cache_size": 256, 00:21:30.336 "bdev_auto_examine": true, 00:21:30.336 "iobuf_small_cache_size": 128, 00:21:30.336 "iobuf_large_cache_size": 16 00:21:30.336 } 00:21:30.336 }, 00:21:30.336 { 00:21:30.336 "method": "bdev_raid_set_options", 00:21:30.336 "params": { 00:21:30.336 "process_window_size_kb": 1024 00:21:30.336 } 00:21:30.336 }, 00:21:30.336 { 00:21:30.336 "method": "bdev_iscsi_set_options", 00:21:30.336 "params": { 00:21:30.336 "timeout_sec": 30 00:21:30.336 } 00:21:30.336 }, 00:21:30.336 { 00:21:30.336 "method": "bdev_nvme_set_options", 00:21:30.336 "params": { 00:21:30.336 "action_on_timeout": "none", 00:21:30.336 "timeout_us": 0, 00:21:30.336 "timeout_admin_us": 0, 00:21:30.336 "keep_alive_timeout_ms": 10000, 00:21:30.336 "arbitration_burst": 0, 00:21:30.336 "low_priority_weight": 0, 00:21:30.336 "medium_priority_weight": 0, 00:21:30.336 "high_priority_weight": 0, 00:21:30.336 "nvme_adminq_poll_period_us": 10000, 00:21:30.336 "nvme_ioq_poll_period_us": 0, 00:21:30.336 "io_queue_requests": 512, 00:21:30.336 "delay_cmd_submit": true, 00:21:30.336 "transport_retry_count": 4, 00:21:30.336 
"bdev_retry_count": 3, 00:21:30.336 "transport_ack_timeout": 0, 00:21:30.336 "ctrlr_loss_timeout_sec": 0, 00:21:30.336 "reconnect_delay_sec": 0, 00:21:30.336 "fast_io_fail_timeout_sec": 0, 00:21:30.336 "disable_auto_failback": false, 00:21:30.336 "generate_uuids": false, 00:21:30.336 "transport_tos": 0, 00:21:30.336 "nvme_error_stat": false, 00:21:30.336 "rdma_srq_size": 0, 00:21:30.336 "io_path_stat": false, 00:21:30.336 "allow_accel_sequence": false, 00:21:30.336 "rdma_max_cq_size": 0, 00:21:30.336 "rdma_cm_event_timeout_ms": 0, 00:21:30.336 "dhchap_digests": [ 00:21:30.336 "sha256", 00:21:30.336 "sha384", 00:21:30.336 "sha512" 00:21:30.336 ], 00:21:30.336 "dhchap_dhgroups": [ 00:21:30.336 "null", 00:21:30.336 "ffdhe2048", 00:21:30.336 "ffdhe3072", 00:21:30.336 "ffdhe4096", 00:21:30.336 "ffdhe6144", 00:21:30.336 "ffdhe8192" 00:21:30.336 ] 00:21:30.336 } 00:21:30.336 }, 00:21:30.336 { 00:21:30.336 "method": "bdev_nvme_attach_controller", 00:21:30.336 "params": { 00:21:30.336 "name": "TLSTEST", 00:21:30.336 "trtype": "TCP", 00:21:30.336 "adrfam": "IPv4", 00:21:30.336 "traddr": "10.0.0.2", 00:21:30.336 "trsvcid": "4420", 00:21:30.336 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.336 "prchk_reftag": false, 00:21:30.336 "prchk_guard": false, 00:21:30.336 "ctrlr_loss_timeout_sec": 0, 00:21:30.336 "reconnect_delay_sec": 0, 00:21:30.336 "fast_io_fail_timeout_sec": 0, 00:21:30.336 "psk": "/tmp/tmp.ZAw7DNhBVc", 00:21:30.336 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:30.336 "hdgst": false, 00:21:30.336 "ddgst": false 00:21:30.336 } 00:21:30.336 }, 00:21:30.336 { 00:21:30.336 "method": "bdev_nvme_set_hotplug", 00:21:30.336 "params": { 00:21:30.336 "period_us": 100000, 00:21:30.336 "enable": false 00:21:30.336 } 00:21:30.336 }, 00:21:30.336 { 00:21:30.336 "method": "bdev_wait_for_examine" 00:21:30.336 } 00:21:30.336 ] 00:21:30.336 }, 00:21:30.336 { 00:21:30.336 "subsystem": "nbd", 00:21:30.336 "config": [] 00:21:30.336 } 00:21:30.336 ] 00:21:30.336 }' 00:21:30.336 
12:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.336 12:51:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.336 [2024-07-15 12:51:22.271020] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:21:30.336 [2024-07-15 12:51:22.271084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3980869 ] 00:21:30.595 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.595 [2024-07-15 12:51:22.383773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.595 [2024-07-15 12:51:22.535697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.854 [2024-07-15 12:51:22.744503] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:30.854 [2024-07-15 12:51:22.744671] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:31.423 12:51:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:31.423 12:51:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:31.423 12:51:23 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:31.423 Running I/O for 10 seconds... 
00:21:41.396
00:21:41.396 Latency(us)
00:21:41.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:41.396 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:41.396 Verification LBA range: start 0x0 length 0x2000
00:21:41.396 TLSTESTn1 : 10.02 2826.22 11.04 0.00 0.00 45168.91 10843.23 48854.11
00:21:41.396 ===================================================================================================================
00:21:41.396 Total : 2826.22 11.04 0.00 0.00 45168.91 10843.23 48854.11
00:21:41.396 0
00:21:41.396 12:51:33 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:41.396 12:51:33 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3980869
00:21:41.396 12:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3980869 ']'
00:21:41.396 12:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3980869
00:21:41.396 12:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:21:41.655 12:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:41.655 12:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3980869
00:21:41.655 12:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:21:41.655 12:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:21:41.655 12:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3980869'
00:21:41.655 killing process with pid 3980869
00:21:41.655 12:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3980869
00:21:41.655 Received shutdown signal, test time was about 10.000000 seconds
00:21:41.655
00:21:41.655 Latency(us)
00:21:41.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:41.655
=================================================================================================================== 00:21:41.655 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:41.655 [2024-07-15 12:51:33.381176] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:41.655 12:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3980869 00:21:41.914 12:51:33 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3980637 00:21:41.915 12:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3980637 ']' 00:21:41.915 12:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3980637 00:21:41.915 12:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:41.915 12:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:41.915 12:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3980637 00:21:41.915 12:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:41.915 12:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:41.915 12:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3980637' 00:21:41.915 killing process with pid 3980637 00:21:41.915 12:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3980637 00:21:41.915 [2024-07-15 12:51:33.764214] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:41.915 12:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3980637 00:21:42.174 12:51:34 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:42.174 12:51:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:42.174 12:51:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:21:42.174 12:51:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.174 12:51:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3982823 00:21:42.174 12:51:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3982823 00:21:42.174 12:51:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:42.174 12:51:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3982823 ']' 00:21:42.174 12:51:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.174 12:51:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:42.174 12:51:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.174 12:51:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:42.174 12:51:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.433 [2024-07-15 12:51:34.143150] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:21:42.434 [2024-07-15 12:51:34.143214] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.434 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.434 [2024-07-15 12:51:34.230279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.434 [2024-07-15 12:51:34.319247] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:42.434 [2024-07-15 12:51:34.319297] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.434 [2024-07-15 12:51:34.319307] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.434 [2024-07-15 12:51:34.319316] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.434 [2024-07-15 12:51:34.319324] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:42.434 [2024-07-15 12:51:34.319346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.693 12:51:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.693 12:51:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:42.693 12:51:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:42.693 12:51:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:42.693 12:51:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.693 12:51:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.693 12:51:34 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.ZAw7DNhBVc 00:21:42.693 12:51:34 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ZAw7DNhBVc 00:21:42.693 12:51:34 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:42.952 [2024-07-15 12:51:34.684287] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.952 12:51:34 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:43.211 12:51:34 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:43.471 [2024-07-15 12:51:35.169565] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:43.471 [2024-07-15 12:51:35.169770] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.471 12:51:35 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:43.729 malloc0 00:21:43.729 12:51:35 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:43.729 12:51:35 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZAw7DNhBVc 00:21:43.988 [2024-07-15 12:51:35.880912] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:43.988 12:51:35 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3983284 00:21:43.988 12:51:35 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:43.988 12:51:35 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:43.988 12:51:35 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3983284 /var/tmp/bdevperf.sock 00:21:43.988 12:51:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3983284 ']' 00:21:43.988 12:51:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.988 12:51:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:21:43.988 12:51:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:43.988 12:51:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:43.988 12:51:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.247 [2024-07-15 12:51:35.945735] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:21:44.247 [2024-07-15 12:51:35.945792] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3983284 ] 00:21:44.247 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.247 [2024-07-15 12:51:36.026499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.247 [2024-07-15 12:51:36.130641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.183 12:51:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:45.183 12:51:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:45.183 12:51:36 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZAw7DNhBVc 00:21:45.183 12:51:37 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:45.441 [2024-07-15 12:51:37.335521] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:45.699 
nvme0n1
00:21:45.700 12:51:37 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:45.700 Running I/O for 1 seconds...
00:21:46.635
00:21:46.635 Latency(us)
00:21:46.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:46.635 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:46.635 Verification LBA range: start 0x0 length 0x2000
00:21:46.635 nvme0n1 : 1.02 3671.42 14.34 0.00 0.00 34515.19 7804.74 34555.35
00:21:46.635 ===================================================================================================================
00:21:46.635 Total : 3671.42 14.34 0.00 0.00 34515.19 7804.74 34555.35
00:21:46.635 0
00:21:46.894 12:51:38 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3983284
00:21:46.894 12:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3983284 ']'
00:21:46.894 12:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3983284
00:21:46.894 12:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:21:46.894 12:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:46.894 12:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3983284
00:21:46.894 12:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:21:46.894 12:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:21:46.894 12:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3983284'
00:21:46.894 killing process with pid 3983284
00:21:46.894 12:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3983284
00:21:46.894 Received shutdown signal, test time was about 1.000000 seconds
00:21:46.894
00:21:46.894 Latency(us)
00:21:46.894 Device Information : runtime(s) IOPS
MiB/s Fail/s TO/s Average min max 00:21:46.894 =================================================================================================================== 00:21:46.894 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:46.894 12:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3983284 00:21:47.153 12:51:38 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3982823 00:21:47.153 12:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3982823 ']' 00:21:47.153 12:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3982823 00:21:47.153 12:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:47.153 12:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:47.153 12:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3982823 00:21:47.153 12:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:47.153 12:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:47.153 12:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3982823' 00:21:47.153 killing process with pid 3982823 00:21:47.153 12:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3982823 00:21:47.153 [2024-07-15 12:51:38.910453] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:47.153 12:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3982823 00:21:47.411 12:51:39 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:21:47.411 12:51:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:47.411 12:51:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:47.411 12:51:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.411 12:51:39 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=3983848 00:21:47.411 12:51:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:47.411 12:51:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3983848 00:21:47.411 12:51:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3983848 ']' 00:21:47.411 12:51:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.411 12:51:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:47.411 12:51:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.411 12:51:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:47.411 12:51:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.411 [2024-07-15 12:51:39.184499] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:21:47.411 [2024-07-15 12:51:39.184557] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.411 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.411 [2024-07-15 12:51:39.270523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.670 [2024-07-15 12:51:39.359194] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.670 [2024-07-15 12:51:39.359236] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:47.670 [2024-07-15 12:51:39.359246] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.670 [2024-07-15 12:51:39.359261] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.670 [2024-07-15 12:51:39.359269] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.670 [2024-07-15 12:51:39.359298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.238 12:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:48.238 12:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:48.238 12:51:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:48.238 12:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:48.238 12:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.238 12:51:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.238 12:51:40 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:21:48.238 12:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.238 12:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.238 [2024-07-15 12:51:40.165324] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.496 malloc0 00:21:48.496 [2024-07-15 12:51:40.194531] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:48.496 [2024-07-15 12:51:40.194733] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.496 12:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.496 12:51:40 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3984023 00:21:48.496 12:51:40 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 
3984023 /var/tmp/bdevperf.sock 00:21:48.496 12:51:40 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:48.496 12:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3984023 ']' 00:21:48.496 12:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:48.496 12:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:48.496 12:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:48.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:48.496 12:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:48.496 12:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.496 [2024-07-15 12:51:40.273219] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:21:48.496 [2024-07-15 12:51:40.273277] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3984023 ] 00:21:48.496 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.496 [2024-07-15 12:51:40.353458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.755 [2024-07-15 12:51:40.457152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.323 12:51:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:49.323 12:51:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:49.323 12:51:41 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZAw7DNhBVc 00:21:49.582 12:51:41 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:49.842 [2024-07-15 12:51:41.709120] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:50.100 nvme0n1 00:21:50.100 12:51:41 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:50.100 Running I/O for 1 seconds... 
00:21:51.036
00:21:51.036 Latency(us)
00:21:51.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:51.036 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:51.036 Verification LBA range: start 0x0 length 0x2000
00:21:51.036 nvme0n1 : 1.02 3734.41 14.59 0.00 0.00 33899.46 9115.46 35031.97
00:21:51.036 ===================================================================================================================
00:21:51.036 Total : 3734.41 14.59 0.00 0.00 33899.46 9115.46 35031.97
00:21:51.036 0
00:21:51.036 12:51:42 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config
00:21:51.036 12:51:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:51.036 12:51:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:21:51.296 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:51.296 12:51:43 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{
00:21:51.296 "subsystems": [
00:21:51.296 {
00:21:51.296 "subsystem": "keyring",
00:21:51.296 "config": [
00:21:51.296 {
00:21:51.296 "method": "keyring_file_add_key",
00:21:51.296 "params": {
00:21:51.296 "name": "key0",
00:21:51.296 "path": "/tmp/tmp.ZAw7DNhBVc"
00:21:51.296 }
00:21:51.296 }
00:21:51.296 ]
00:21:51.296 },
00:21:51.296 {
00:21:51.296 "subsystem": "iobuf",
00:21:51.296 "config": [
00:21:51.296 {
00:21:51.296 "method": "iobuf_set_options",
00:21:51.296 "params": {
00:21:51.296 "small_pool_count": 8192,
00:21:51.296 "large_pool_count": 1024,
00:21:51.296 "small_bufsize": 8192,
00:21:51.296 "large_bufsize": 135168
00:21:51.296 }
00:21:51.296 }
00:21:51.296 ]
00:21:51.296 },
00:21:51.296 {
00:21:51.296 "subsystem": "sock",
00:21:51.296 "config": [
00:21:51.296 {
00:21:51.296 "method": "sock_set_default_impl",
00:21:51.296 "params": {
00:21:51.296 "impl_name": "posix"
00:21:51.296 }
00:21:51.296 },
00:21:51.296 {
00:21:51.296 "method": "sock_impl_set_options",
00:21:51.296
"params": { 00:21:51.296 "impl_name": "ssl", 00:21:51.296 "recv_buf_size": 4096, 00:21:51.296 "send_buf_size": 4096, 00:21:51.296 "enable_recv_pipe": true, 00:21:51.296 "enable_quickack": false, 00:21:51.296 "enable_placement_id": 0, 00:21:51.296 "enable_zerocopy_send_server": true, 00:21:51.296 "enable_zerocopy_send_client": false, 00:21:51.296 "zerocopy_threshold": 0, 00:21:51.296 "tls_version": 0, 00:21:51.296 "enable_ktls": false 00:21:51.296 } 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "method": "sock_impl_set_options", 00:21:51.296 "params": { 00:21:51.296 "impl_name": "posix", 00:21:51.296 "recv_buf_size": 2097152, 00:21:51.296 "send_buf_size": 2097152, 00:21:51.296 "enable_recv_pipe": true, 00:21:51.296 "enable_quickack": false, 00:21:51.296 "enable_placement_id": 0, 00:21:51.296 "enable_zerocopy_send_server": true, 00:21:51.296 "enable_zerocopy_send_client": false, 00:21:51.296 "zerocopy_threshold": 0, 00:21:51.296 "tls_version": 0, 00:21:51.296 "enable_ktls": false 00:21:51.296 } 00:21:51.296 } 00:21:51.296 ] 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "subsystem": "vmd", 00:21:51.296 "config": [] 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "subsystem": "accel", 00:21:51.296 "config": [ 00:21:51.296 { 00:21:51.296 "method": "accel_set_options", 00:21:51.296 "params": { 00:21:51.296 "small_cache_size": 128, 00:21:51.296 "large_cache_size": 16, 00:21:51.296 "task_count": 2048, 00:21:51.296 "sequence_count": 2048, 00:21:51.296 "buf_count": 2048 00:21:51.296 } 00:21:51.296 } 00:21:51.296 ] 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "subsystem": "bdev", 00:21:51.296 "config": [ 00:21:51.296 { 00:21:51.296 "method": "bdev_set_options", 00:21:51.296 "params": { 00:21:51.296 "bdev_io_pool_size": 65535, 00:21:51.296 "bdev_io_cache_size": 256, 00:21:51.296 "bdev_auto_examine": true, 00:21:51.296 "iobuf_small_cache_size": 128, 00:21:51.296 "iobuf_large_cache_size": 16 00:21:51.296 } 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "method": "bdev_raid_set_options", 
00:21:51.296 "params": { 00:21:51.296 "process_window_size_kb": 1024 00:21:51.296 } 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "method": "bdev_iscsi_set_options", 00:21:51.296 "params": { 00:21:51.296 "timeout_sec": 30 00:21:51.296 } 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "method": "bdev_nvme_set_options", 00:21:51.296 "params": { 00:21:51.296 "action_on_timeout": "none", 00:21:51.296 "timeout_us": 0, 00:21:51.296 "timeout_admin_us": 0, 00:21:51.296 "keep_alive_timeout_ms": 10000, 00:21:51.296 "arbitration_burst": 0, 00:21:51.296 "low_priority_weight": 0, 00:21:51.296 "medium_priority_weight": 0, 00:21:51.296 "high_priority_weight": 0, 00:21:51.296 "nvme_adminq_poll_period_us": 10000, 00:21:51.296 "nvme_ioq_poll_period_us": 0, 00:21:51.296 "io_queue_requests": 0, 00:21:51.296 "delay_cmd_submit": true, 00:21:51.296 "transport_retry_count": 4, 00:21:51.296 "bdev_retry_count": 3, 00:21:51.296 "transport_ack_timeout": 0, 00:21:51.296 "ctrlr_loss_timeout_sec": 0, 00:21:51.296 "reconnect_delay_sec": 0, 00:21:51.296 "fast_io_fail_timeout_sec": 0, 00:21:51.296 "disable_auto_failback": false, 00:21:51.296 "generate_uuids": false, 00:21:51.296 "transport_tos": 0, 00:21:51.296 "nvme_error_stat": false, 00:21:51.296 "rdma_srq_size": 0, 00:21:51.296 "io_path_stat": false, 00:21:51.296 "allow_accel_sequence": false, 00:21:51.296 "rdma_max_cq_size": 0, 00:21:51.296 "rdma_cm_event_timeout_ms": 0, 00:21:51.296 "dhchap_digests": [ 00:21:51.296 "sha256", 00:21:51.296 "sha384", 00:21:51.296 "sha512" 00:21:51.296 ], 00:21:51.296 "dhchap_dhgroups": [ 00:21:51.296 "null", 00:21:51.296 "ffdhe2048", 00:21:51.296 "ffdhe3072", 00:21:51.296 "ffdhe4096", 00:21:51.296 "ffdhe6144", 00:21:51.296 "ffdhe8192" 00:21:51.296 ] 00:21:51.296 } 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "method": "bdev_nvme_set_hotplug", 00:21:51.296 "params": { 00:21:51.296 "period_us": 100000, 00:21:51.296 "enable": false 00:21:51.296 } 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "method": "bdev_malloc_create", 
00:21:51.296 "params": { 00:21:51.296 "name": "malloc0", 00:21:51.296 "num_blocks": 8192, 00:21:51.296 "block_size": 4096, 00:21:51.296 "physical_block_size": 4096, 00:21:51.296 "uuid": "c71ad4ae-a1cd-4dc3-9beb-7cf964a5e5b6", 00:21:51.296 "optimal_io_boundary": 0 00:21:51.296 } 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "method": "bdev_wait_for_examine" 00:21:51.296 } 00:21:51.296 ] 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "subsystem": "nbd", 00:21:51.296 "config": [] 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "subsystem": "scheduler", 00:21:51.296 "config": [ 00:21:51.296 { 00:21:51.296 "method": "framework_set_scheduler", 00:21:51.296 "params": { 00:21:51.296 "name": "static" 00:21:51.296 } 00:21:51.296 } 00:21:51.296 ] 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "subsystem": "nvmf", 00:21:51.296 "config": [ 00:21:51.296 { 00:21:51.296 "method": "nvmf_set_config", 00:21:51.296 "params": { 00:21:51.296 "discovery_filter": "match_any", 00:21:51.296 "admin_cmd_passthru": { 00:21:51.296 "identify_ctrlr": false 00:21:51.296 } 00:21:51.296 } 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "method": "nvmf_set_max_subsystems", 00:21:51.296 "params": { 00:21:51.296 "max_subsystems": 1024 00:21:51.296 } 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "method": "nvmf_set_crdt", 00:21:51.296 "params": { 00:21:51.296 "crdt1": 0, 00:21:51.296 "crdt2": 0, 00:21:51.296 "crdt3": 0 00:21:51.296 } 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "method": "nvmf_create_transport", 00:21:51.296 "params": { 00:21:51.296 "trtype": "TCP", 00:21:51.296 "max_queue_depth": 128, 00:21:51.296 "max_io_qpairs_per_ctrlr": 127, 00:21:51.296 "in_capsule_data_size": 4096, 00:21:51.296 "max_io_size": 131072, 00:21:51.296 "io_unit_size": 131072, 00:21:51.296 "max_aq_depth": 128, 00:21:51.296 "num_shared_buffers": 511, 00:21:51.296 "buf_cache_size": 4294967295, 00:21:51.296 "dif_insert_or_strip": false, 00:21:51.296 "zcopy": false, 00:21:51.296 "c2h_success": false, 00:21:51.296 "sock_priority": 0, 
00:21:51.296 "abort_timeout_sec": 1, 00:21:51.296 "ack_timeout": 0, 00:21:51.296 "data_wr_pool_size": 0 00:21:51.296 } 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "method": "nvmf_create_subsystem", 00:21:51.296 "params": { 00:21:51.296 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.296 "allow_any_host": false, 00:21:51.296 "serial_number": "00000000000000000000", 00:21:51.296 "model_number": "SPDK bdev Controller", 00:21:51.296 "max_namespaces": 32, 00:21:51.296 "min_cntlid": 1, 00:21:51.296 "max_cntlid": 65519, 00:21:51.296 "ana_reporting": false 00:21:51.296 } 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "method": "nvmf_subsystem_add_host", 00:21:51.296 "params": { 00:21:51.296 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.296 "host": "nqn.2016-06.io.spdk:host1", 00:21:51.296 "psk": "key0" 00:21:51.296 } 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "method": "nvmf_subsystem_add_ns", 00:21:51.296 "params": { 00:21:51.296 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.296 "namespace": { 00:21:51.296 "nsid": 1, 00:21:51.296 "bdev_name": "malloc0", 00:21:51.296 "nguid": "C71AD4AEA1CD4DC39BEB7CF964A5E5B6", 00:21:51.296 "uuid": "c71ad4ae-a1cd-4dc3-9beb-7cf964a5e5b6", 00:21:51.296 "no_auto_visible": false 00:21:51.296 } 00:21:51.296 } 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "method": "nvmf_subsystem_add_listener", 00:21:51.296 "params": { 00:21:51.296 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.296 "listen_address": { 00:21:51.297 "trtype": "TCP", 00:21:51.297 "adrfam": "IPv4", 00:21:51.297 "traddr": "10.0.0.2", 00:21:51.297 "trsvcid": "4420" 00:21:51.297 }, 00:21:51.297 "secure_channel": true 00:21:51.297 } 00:21:51.297 } 00:21:51.297 ] 00:21:51.297 } 00:21:51.297 ] 00:21:51.297 }' 00:21:51.297 12:51:43 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:51.556 12:51:43 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:21:51.556 "subsystems": [ 00:21:51.556 { 
00:21:51.556 "subsystem": "keyring", 00:21:51.556 "config": [ 00:21:51.556 { 00:21:51.556 "method": "keyring_file_add_key", 00:21:51.556 "params": { 00:21:51.556 "name": "key0", 00:21:51.556 "path": "/tmp/tmp.ZAw7DNhBVc" 00:21:51.556 } 00:21:51.556 } 00:21:51.556 ] 00:21:51.556 }, 00:21:51.556 { 00:21:51.556 "subsystem": "iobuf", 00:21:51.556 "config": [ 00:21:51.556 { 00:21:51.556 "method": "iobuf_set_options", 00:21:51.556 "params": { 00:21:51.556 "small_pool_count": 8192, 00:21:51.556 "large_pool_count": 1024, 00:21:51.556 "small_bufsize": 8192, 00:21:51.556 "large_bufsize": 135168 00:21:51.556 } 00:21:51.556 } 00:21:51.556 ] 00:21:51.556 }, 00:21:51.556 { 00:21:51.556 "subsystem": "sock", 00:21:51.556 "config": [ 00:21:51.556 { 00:21:51.556 "method": "sock_set_default_impl", 00:21:51.556 "params": { 00:21:51.556 "impl_name": "posix" 00:21:51.556 } 00:21:51.556 }, 00:21:51.556 { 00:21:51.556 "method": "sock_impl_set_options", 00:21:51.556 "params": { 00:21:51.556 "impl_name": "ssl", 00:21:51.556 "recv_buf_size": 4096, 00:21:51.556 "send_buf_size": 4096, 00:21:51.556 "enable_recv_pipe": true, 00:21:51.556 "enable_quickack": false, 00:21:51.556 "enable_placement_id": 0, 00:21:51.556 "enable_zerocopy_send_server": true, 00:21:51.556 "enable_zerocopy_send_client": false, 00:21:51.556 "zerocopy_threshold": 0, 00:21:51.556 "tls_version": 0, 00:21:51.556 "enable_ktls": false 00:21:51.556 } 00:21:51.556 }, 00:21:51.556 { 00:21:51.556 "method": "sock_impl_set_options", 00:21:51.556 "params": { 00:21:51.556 "impl_name": "posix", 00:21:51.556 "recv_buf_size": 2097152, 00:21:51.556 "send_buf_size": 2097152, 00:21:51.556 "enable_recv_pipe": true, 00:21:51.556 "enable_quickack": false, 00:21:51.556 "enable_placement_id": 0, 00:21:51.556 "enable_zerocopy_send_server": true, 00:21:51.556 "enable_zerocopy_send_client": false, 00:21:51.556 "zerocopy_threshold": 0, 00:21:51.556 "tls_version": 0, 00:21:51.556 "enable_ktls": false 00:21:51.556 } 00:21:51.556 } 00:21:51.556 ] 
00:21:51.556 }, 00:21:51.556 { 00:21:51.556 "subsystem": "vmd", 00:21:51.556 "config": [] 00:21:51.556 }, 00:21:51.556 { 00:21:51.556 "subsystem": "accel", 00:21:51.556 "config": [ 00:21:51.556 { 00:21:51.556 "method": "accel_set_options", 00:21:51.556 "params": { 00:21:51.556 "small_cache_size": 128, 00:21:51.556 "large_cache_size": 16, 00:21:51.556 "task_count": 2048, 00:21:51.556 "sequence_count": 2048, 00:21:51.556 "buf_count": 2048 00:21:51.556 } 00:21:51.556 } 00:21:51.556 ] 00:21:51.556 }, 00:21:51.556 { 00:21:51.556 "subsystem": "bdev", 00:21:51.556 "config": [ 00:21:51.556 { 00:21:51.556 "method": "bdev_set_options", 00:21:51.556 "params": { 00:21:51.556 "bdev_io_pool_size": 65535, 00:21:51.556 "bdev_io_cache_size": 256, 00:21:51.556 "bdev_auto_examine": true, 00:21:51.556 "iobuf_small_cache_size": 128, 00:21:51.556 "iobuf_large_cache_size": 16 00:21:51.556 } 00:21:51.556 }, 00:21:51.556 { 00:21:51.556 "method": "bdev_raid_set_options", 00:21:51.556 "params": { 00:21:51.556 "process_window_size_kb": 1024 00:21:51.556 } 00:21:51.556 }, 00:21:51.556 { 00:21:51.556 "method": "bdev_iscsi_set_options", 00:21:51.556 "params": { 00:21:51.556 "timeout_sec": 30 00:21:51.556 } 00:21:51.556 }, 00:21:51.556 { 00:21:51.556 "method": "bdev_nvme_set_options", 00:21:51.556 "params": { 00:21:51.556 "action_on_timeout": "none", 00:21:51.556 "timeout_us": 0, 00:21:51.556 "timeout_admin_us": 0, 00:21:51.556 "keep_alive_timeout_ms": 10000, 00:21:51.556 "arbitration_burst": 0, 00:21:51.556 "low_priority_weight": 0, 00:21:51.556 "medium_priority_weight": 0, 00:21:51.556 "high_priority_weight": 0, 00:21:51.556 "nvme_adminq_poll_period_us": 10000, 00:21:51.556 "nvme_ioq_poll_period_us": 0, 00:21:51.556 "io_queue_requests": 512, 00:21:51.556 "delay_cmd_submit": true, 00:21:51.556 "transport_retry_count": 4, 00:21:51.556 "bdev_retry_count": 3, 00:21:51.556 "transport_ack_timeout": 0, 00:21:51.556 "ctrlr_loss_timeout_sec": 0, 00:21:51.556 "reconnect_delay_sec": 0, 00:21:51.556 
"fast_io_fail_timeout_sec": 0, 00:21:51.556 "disable_auto_failback": false, 00:21:51.556 "generate_uuids": false, 00:21:51.556 "transport_tos": 0, 00:21:51.556 "nvme_error_stat": false, 00:21:51.556 "rdma_srq_size": 0, 00:21:51.556 "io_path_stat": false, 00:21:51.556 "allow_accel_sequence": false, 00:21:51.556 "rdma_max_cq_size": 0, 00:21:51.556 "rdma_cm_event_timeout_ms": 0, 00:21:51.556 "dhchap_digests": [ 00:21:51.556 "sha256", 00:21:51.556 "sha384", 00:21:51.556 "sha512" 00:21:51.556 ], 00:21:51.556 "dhchap_dhgroups": [ 00:21:51.556 "null", 00:21:51.556 "ffdhe2048", 00:21:51.556 "ffdhe3072", 00:21:51.556 "ffdhe4096", 00:21:51.556 "ffdhe6144", 00:21:51.556 "ffdhe8192" 00:21:51.556 ] 00:21:51.556 } 00:21:51.556 }, 00:21:51.556 { 00:21:51.556 "method": "bdev_nvme_attach_controller", 00:21:51.556 "params": { 00:21:51.556 "name": "nvme0", 00:21:51.556 "trtype": "TCP", 00:21:51.556 "adrfam": "IPv4", 00:21:51.556 "traddr": "10.0.0.2", 00:21:51.556 "trsvcid": "4420", 00:21:51.557 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.557 "prchk_reftag": false, 00:21:51.557 "prchk_guard": false, 00:21:51.557 "ctrlr_loss_timeout_sec": 0, 00:21:51.557 "reconnect_delay_sec": 0, 00:21:51.557 "fast_io_fail_timeout_sec": 0, 00:21:51.557 "psk": "key0", 00:21:51.557 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:51.557 "hdgst": false, 00:21:51.557 "ddgst": false 00:21:51.557 } 00:21:51.557 }, 00:21:51.557 { 00:21:51.557 "method": "bdev_nvme_set_hotplug", 00:21:51.557 "params": { 00:21:51.557 "period_us": 100000, 00:21:51.557 "enable": false 00:21:51.557 } 00:21:51.557 }, 00:21:51.557 { 00:21:51.557 "method": "bdev_enable_histogram", 00:21:51.557 "params": { 00:21:51.557 "name": "nvme0n1", 00:21:51.557 "enable": true 00:21:51.557 } 00:21:51.557 }, 00:21:51.557 { 00:21:51.557 "method": "bdev_wait_for_examine" 00:21:51.557 } 00:21:51.557 ] 00:21:51.557 }, 00:21:51.557 { 00:21:51.557 "subsystem": "nbd", 00:21:51.557 "config": [] 00:21:51.557 } 00:21:51.557 ] 00:21:51.557 }' 00:21:51.557 
12:51:43 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3984023
00:21:51.557 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3984023 ']'
00:21:51.557 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3984023
00:21:51.557 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:21:51.557 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:51.557 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3984023
00:21:51.557 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:21:51.557 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:21:51.557 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3984023'
00:21:51.557 killing process with pid 3984023
12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3984023
00:21:51.557 Received shutdown signal, test time was about 1.000000 seconds
00:21:51.557
00:21:51.557 Latency(us)
00:21:51.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:51.557 ===================================================================================================================
00:21:51.557 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:51.557 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3984023
00:21:51.815 12:51:43 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3983848
00:21:51.815 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3983848 ']'
00:21:51.815 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3983848
00:21:51.815 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:21:51.815 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:51.815 12:51:43 nvmf_tcp.nvmf_tls --
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3983848 00:21:51.815 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:51.815 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:51.815 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3983848' 00:21:51.815 killing process with pid 3983848 00:21:51.815 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3983848 00:21:51.815 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3983848 00:21:52.074 12:51:43 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:52.074 12:51:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:52.074 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:52.074 12:51:43 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:52.074 "subsystems": [ 00:21:52.074 { 00:21:52.074 "subsystem": "keyring", 00:21:52.074 "config": [ 00:21:52.074 { 00:21:52.074 "method": "keyring_file_add_key", 00:21:52.074 "params": { 00:21:52.074 "name": "key0", 00:21:52.074 "path": "/tmp/tmp.ZAw7DNhBVc" 00:21:52.074 } 00:21:52.074 } 00:21:52.074 ] 00:21:52.074 }, 00:21:52.074 { 00:21:52.074 "subsystem": "iobuf", 00:21:52.074 "config": [ 00:21:52.074 { 00:21:52.074 "method": "iobuf_set_options", 00:21:52.074 "params": { 00:21:52.074 "small_pool_count": 8192, 00:21:52.074 "large_pool_count": 1024, 00:21:52.074 "small_bufsize": 8192, 00:21:52.074 "large_bufsize": 135168 00:21:52.074 } 00:21:52.074 } 00:21:52.074 ] 00:21:52.074 }, 00:21:52.074 { 00:21:52.074 "subsystem": "sock", 00:21:52.074 "config": [ 00:21:52.074 { 00:21:52.074 "method": "sock_set_default_impl", 00:21:52.074 "params": { 00:21:52.074 "impl_name": "posix" 00:21:52.074 } 00:21:52.074 }, 00:21:52.074 { 00:21:52.074 "method": "sock_impl_set_options", 00:21:52.074 "params": { 00:21:52.074 
"impl_name": "ssl", 00:21:52.074 "recv_buf_size": 4096, 00:21:52.074 "send_buf_size": 4096, 00:21:52.074 "enable_recv_pipe": true, 00:21:52.074 "enable_quickack": false, 00:21:52.074 "enable_placement_id": 0, 00:21:52.074 "enable_zerocopy_send_server": true, 00:21:52.074 "enable_zerocopy_send_client": false, 00:21:52.074 "zerocopy_threshold": 0, 00:21:52.074 "tls_version": 0, 00:21:52.074 "enable_ktls": false 00:21:52.074 } 00:21:52.074 }, 00:21:52.074 { 00:21:52.074 "method": "sock_impl_set_options", 00:21:52.074 "params": { 00:21:52.074 "impl_name": "posix", 00:21:52.074 "recv_buf_size": 2097152, 00:21:52.074 "send_buf_size": 2097152, 00:21:52.074 "enable_recv_pipe": true, 00:21:52.074 "enable_quickack": false, 00:21:52.074 "enable_placement_id": 0, 00:21:52.074 "enable_zerocopy_send_server": true, 00:21:52.074 "enable_zerocopy_send_client": false, 00:21:52.074 "zerocopy_threshold": 0, 00:21:52.074 "tls_version": 0, 00:21:52.074 "enable_ktls": false 00:21:52.074 } 00:21:52.074 } 00:21:52.074 ] 00:21:52.074 }, 00:21:52.074 { 00:21:52.074 "subsystem": "vmd", 00:21:52.074 "config": [] 00:21:52.074 }, 00:21:52.074 { 00:21:52.074 "subsystem": "accel", 00:21:52.074 "config": [ 00:21:52.074 { 00:21:52.074 "method": "accel_set_options", 00:21:52.074 "params": { 00:21:52.074 "small_cache_size": 128, 00:21:52.074 "large_cache_size": 16, 00:21:52.074 "task_count": 2048, 00:21:52.075 "sequence_count": 2048, 00:21:52.075 "buf_count": 2048 00:21:52.075 } 00:21:52.075 } 00:21:52.075 ] 00:21:52.075 }, 00:21:52.075 { 00:21:52.075 "subsystem": "bdev", 00:21:52.075 "config": [ 00:21:52.075 { 00:21:52.075 "method": "bdev_set_options", 00:21:52.075 "params": { 00:21:52.075 "bdev_io_pool_size": 65535, 00:21:52.075 "bdev_io_cache_size": 256, 00:21:52.075 "bdev_auto_examine": true, 00:21:52.075 "iobuf_small_cache_size": 128, 00:21:52.075 "iobuf_large_cache_size": 16 00:21:52.075 } 00:21:52.075 }, 00:21:52.075 { 00:21:52.075 "method": "bdev_raid_set_options", 00:21:52.075 "params": { 
00:21:52.075 "process_window_size_kb": 1024 00:21:52.075 } 00:21:52.075 }, 00:21:52.075 { 00:21:52.075 "method": "bdev_iscsi_set_options", 00:21:52.075 "params": { 00:21:52.075 "timeout_sec": 30 00:21:52.075 } 00:21:52.075 }, 00:21:52.075 { 00:21:52.075 "method": "bdev_nvme_set_options", 00:21:52.075 "params": { 00:21:52.075 "action_on_timeout": "none", 00:21:52.075 "timeout_us": 0, 00:21:52.075 "timeout_admin_us": 0, 00:21:52.075 "keep_alive_timeout_ms": 10000, 00:21:52.075 "arbitration_burst": 0, 00:21:52.075 "low_priority_weight": 0, 00:21:52.075 "medium_priority_weight": 0, 00:21:52.075 "high_priority_weight": 0, 00:21:52.075 "nvme_adminq_poll_period_us": 10000, 00:21:52.075 "nvme_ioq_poll_period_us": 0, 00:21:52.075 "io_queue_requests": 0, 00:21:52.075 "delay_cmd_submit": true, 00:21:52.075 "transport_retry_count": 4, 00:21:52.075 "bdev_retry_count": 3, 00:21:52.075 "transport_ack_timeout": 0, 00:21:52.075 "ctrlr_loss_timeout_sec": 0, 00:21:52.075 "reconnect_delay_sec": 0, 00:21:52.075 "fast_io_fail_timeout_sec": 0, 00:21:52.075 "disable_auto_failback": false, 00:21:52.075 "generate_uuids": false, 00:21:52.075 "transport_tos": 0, 00:21:52.075 "nvme_error_stat": false, 00:21:52.075 "rdma_srq_size": 0, 00:21:52.075 "io_path_stat": false, 00:21:52.075 "allow_accel_sequence": false, 00:21:52.075 "rdma_max_cq_size": 0, 00:21:52.075 "rdma_cm_event_timeout_ms": 0, 00:21:52.075 "dhchap_digests": [ 00:21:52.075 "sha256", 00:21:52.075 "sha384", 00:21:52.075 "sha512" 00:21:52.075 ], 00:21:52.075 "dhchap_dhgroups": [ 00:21:52.075 "null", 00:21:52.075 "ffdhe2048", 00:21:52.075 "ffdhe3072", 00:21:52.075 "ffdhe4096", 00:21:52.075 "ffdhe6144", 00:21:52.075 "ffdhe8192" 00:21:52.075 ] 00:21:52.075 } 00:21:52.075 }, 00:21:52.075 { 00:21:52.075 "method": "bdev_nvme_set_hotplug", 00:21:52.075 "params": { 00:21:52.075 "period_us": 100000, 00:21:52.075 "enable": false 00:21:52.075 } 00:21:52.075 }, 00:21:52.075 { 00:21:52.075 "method": "bdev_malloc_create", 00:21:52.075 "params": { 
00:21:52.075 "name": "malloc0", 00:21:52.075 "num_blocks": 8192, 00:21:52.075 "block_size": 4096, 00:21:52.075 "physical_block_size": 4096, 00:21:52.075 "uuid": "c71ad4ae-a1cd-4dc3-9beb-7cf964a5e5b6", 00:21:52.075 "optimal_io_boundary": 0 00:21:52.075 } 00:21:52.075 }, 00:21:52.075 { 00:21:52.075 "method": "bdev_wait_for_examine" 00:21:52.075 } 00:21:52.075 ] 00:21:52.075 }, 00:21:52.075 { 00:21:52.075 "subsystem": "nbd", 00:21:52.075 "config": [] 00:21:52.075 }, 00:21:52.075 { 00:21:52.075 "subsystem": "scheduler", 00:21:52.075 "config": [ 00:21:52.075 { 00:21:52.075 "method": "framework_set_scheduler", 00:21:52.075 "params": { 00:21:52.075 "name": "static" 00:21:52.075 } 00:21:52.075 } 00:21:52.075 ] 00:21:52.075 }, 00:21:52.075 { 00:21:52.075 "subsystem": "nvmf", 00:21:52.075 "config": [ 00:21:52.075 { 00:21:52.075 "method": "nvmf_set_config", 00:21:52.075 "params": { 00:21:52.075 "discovery_filter": "match_any", 00:21:52.075 "admin_cmd_passthru": { 00:21:52.075 "identify_ctrlr": false 00:21:52.075 } 00:21:52.075 } 00:21:52.075 }, 00:21:52.075 { 00:21:52.075 "method": "nvmf_set_max_subsystems", 00:21:52.075 "params": { 00:21:52.075 "max_subsystems": 1024 00:21:52.075 } 00:21:52.075 }, 00:21:52.075 { 00:21:52.075 "method": "nvmf_set_crdt", 00:21:52.075 "params": { 00:21:52.075 "crdt1": 0, 00:21:52.075 "crdt2": 0, 00:21:52.075 "crdt3": 0 00:21:52.075 } 00:21:52.075 }, 00:21:52.075 { 00:21:52.075 "method": "nvmf_create_transport", 00:21:52.075 "params": { 00:21:52.075 "trtype": "TCP", 00:21:52.075 "max_queue_depth": 128, 00:21:52.075 "max_io_qpairs_per_ctrlr": 127, 00:21:52.075 "in_capsule_data_size": 4096, 00:21:52.075 "max_io_size": 131072, 00:21:52.075 "io_unit_size": 131072, 00:21:52.075 "max_aq_depth": 128, 00:21:52.075 "num_shared_buffers": 511, 00:21:52.075 "buf_cache_size": 4294967295, 00:21:52.075 "dif_insert_or_strip": false, 00:21:52.075 "zcopy": false, 00:21:52.075 "c2h_success": false, 00:21:52.075 "sock_priority": 0, 00:21:52.075 "abort_timeout_sec": 
1, 00:21:52.075 "ack_timeout": 0, 00:21:52.075 "data_wr_pool_size": 0 00:21:52.075 } 00:21:52.075 }, 00:21:52.075 { 00:21:52.075 "method": "nvmf_create_subsystem", 00:21:52.075 "params": { 00:21:52.075 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.075 "allow_any_host": false, 00:21:52.075 "serial_number": "00000000000000000000", 00:21:52.075 "model_number": "SPDK bdev Controller", 00:21:52.075 "max_namespaces": 32, 00:21:52.075 "min_cntlid": 1, 00:21:52.075 "max_cntlid": 65519, 00:21:52.075 "ana_reporting": false 00:21:52.075 } 00:21:52.075 }, 00:21:52.075 { 00:21:52.075 "method": "nvmf_subsystem_add_host", 00:21:52.075 "params": { 00:21:52.075 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.075 "host": "nqn.2016-06.io.spdk:host1", 00:21:52.075 "psk": "key0" 00:21:52.075 } 00:21:52.075 }, 00:21:52.075 { 00:21:52.075 "method": "nvmf_subsystem_add_ns", 00:21:52.075 "params": { 00:21:52.075 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.075 "namespace": { 00:21:52.075 "nsid": 1, 00:21:52.075 "bdev_name": "malloc0", 00:21:52.075 "nguid": "C71AD4AEA1CD4DC39BEB7CF964A5E5B6", 00:21:52.075 "uuid": "c71ad4ae-a1cd-4dc3-9beb-7cf964a5e5b6", 00:21:52.075 "no_auto_visible": false 00:21:52.075 } 00:21:52.075 } 00:21:52.075 }, 00:21:52.075 { 00:21:52.075 "method": "nvmf_subsystem_add_listener", 00:21:52.075 "params": { 00:21:52.075 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.075 "listen_address": { 00:21:52.075 "trtype": "TCP", 00:21:52.075 "adrfam": "IPv4", 00:21:52.075 "traddr": "10.0.0.2", 00:21:52.075 "trsvcid": "4420" 00:21:52.075 }, 00:21:52.075 "secure_channel": true 00:21:52.075 } 00:21:52.075 } 00:21:52.075 ] 00:21:52.075 } 00:21:52.075 ] 00:21:52.075 }' 00:21:52.075 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:52.075 12:51:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3984680 00:21:52.075 12:51:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:52.075 12:51:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3984680 00:21:52.075 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3984680 ']' 00:21:52.075 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.075 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:52.075 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.075 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:52.075 12:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:52.075 [2024-07-15 12:51:43.968311] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:21:52.075 [2024-07-15 12:51:43.968371] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.075 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.334 [2024-07-15 12:51:44.052655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.334 [2024-07-15 12:51:44.141068] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.334 [2024-07-15 12:51:44.141110] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:52.334 [2024-07-15 12:51:44.141120] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.334 [2024-07-15 12:51:44.141129] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.334 [2024-07-15 12:51:44.141136] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:52.334 [2024-07-15 12:51:44.141201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.593 [2024-07-15 12:51:44.359052] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.593 [2024-07-15 12:51:44.391047] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:52.593 [2024-07-15 12:51:44.401554] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.160 12:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:53.160 12:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:53.160 12:51:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:53.160 12:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:53.160 12:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.160 12:51:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.160 12:51:44 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3984955 00:21:53.160 12:51:44 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3984955 /var/tmp/bdevperf.sock 00:21:53.160 12:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3984955 ']' 00:21:53.160 12:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:53.160 12:51:44 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:53.160 12:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:53.160 12:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:53.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:53.160 12:51:44 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:21:53.160 "subsystems": [ 00:21:53.160 { 00:21:53.160 "subsystem": "keyring", 00:21:53.160 "config": [ 00:21:53.160 { 00:21:53.160 "method": "keyring_file_add_key", 00:21:53.160 "params": { 00:21:53.160 "name": "key0", 00:21:53.160 "path": "/tmp/tmp.ZAw7DNhBVc" 00:21:53.160 } 00:21:53.160 } 00:21:53.160 ] 00:21:53.160 }, 00:21:53.160 { 00:21:53.160 "subsystem": "iobuf", 00:21:53.160 "config": [ 00:21:53.160 { 00:21:53.160 "method": "iobuf_set_options", 00:21:53.160 "params": { 00:21:53.160 "small_pool_count": 8192, 00:21:53.160 "large_pool_count": 1024, 00:21:53.160 "small_bufsize": 8192, 00:21:53.160 "large_bufsize": 135168 00:21:53.160 } 00:21:53.160 } 00:21:53.160 ] 00:21:53.160 }, 00:21:53.160 { 00:21:53.160 "subsystem": "sock", 00:21:53.160 "config": [ 00:21:53.160 { 00:21:53.160 "method": "sock_set_default_impl", 00:21:53.160 "params": { 00:21:53.160 "impl_name": "posix" 00:21:53.160 } 00:21:53.160 }, 00:21:53.160 { 00:21:53.160 "method": "sock_impl_set_options", 00:21:53.160 "params": { 00:21:53.160 "impl_name": "ssl", 00:21:53.160 "recv_buf_size": 4096, 00:21:53.160 "send_buf_size": 4096, 00:21:53.160 "enable_recv_pipe": true, 00:21:53.160 "enable_quickack": false, 00:21:53.160 "enable_placement_id": 0, 00:21:53.160 "enable_zerocopy_send_server": true, 00:21:53.160 "enable_zerocopy_send_client": false, 00:21:53.160 "zerocopy_threshold": 0, 00:21:53.160 
"tls_version": 0, 00:21:53.160 "enable_ktls": false 00:21:53.160 } 00:21:53.160 }, 00:21:53.160 { 00:21:53.160 "method": "sock_impl_set_options", 00:21:53.160 "params": { 00:21:53.160 "impl_name": "posix", 00:21:53.160 "recv_buf_size": 2097152, 00:21:53.160 "send_buf_size": 2097152, 00:21:53.160 "enable_recv_pipe": true, 00:21:53.160 "enable_quickack": false, 00:21:53.160 "enable_placement_id": 0, 00:21:53.160 "enable_zerocopy_send_server": true, 00:21:53.160 "enable_zerocopy_send_client": false, 00:21:53.160 "zerocopy_threshold": 0, 00:21:53.160 "tls_version": 0, 00:21:53.160 "enable_ktls": false 00:21:53.160 } 00:21:53.160 } 00:21:53.160 ] 00:21:53.160 }, 00:21:53.160 { 00:21:53.160 "subsystem": "vmd", 00:21:53.160 "config": [] 00:21:53.160 }, 00:21:53.160 { 00:21:53.160 "subsystem": "accel", 00:21:53.160 "config": [ 00:21:53.160 { 00:21:53.160 "method": "accel_set_options", 00:21:53.160 "params": { 00:21:53.160 "small_cache_size": 128, 00:21:53.160 "large_cache_size": 16, 00:21:53.160 "task_count": 2048, 00:21:53.160 "sequence_count": 2048, 00:21:53.160 "buf_count": 2048 00:21:53.160 } 00:21:53.160 } 00:21:53.160 ] 00:21:53.160 }, 00:21:53.160 { 00:21:53.160 "subsystem": "bdev", 00:21:53.160 "config": [ 00:21:53.160 { 00:21:53.160 "method": "bdev_set_options", 00:21:53.160 "params": { 00:21:53.160 "bdev_io_pool_size": 65535, 00:21:53.160 "bdev_io_cache_size": 256, 00:21:53.160 "bdev_auto_examine": true, 00:21:53.160 "iobuf_small_cache_size": 128, 00:21:53.160 "iobuf_large_cache_size": 16 00:21:53.160 } 00:21:53.160 }, 00:21:53.160 { 00:21:53.160 "method": "bdev_raid_set_options", 00:21:53.160 "params": { 00:21:53.160 "process_window_size_kb": 1024 00:21:53.160 } 00:21:53.160 }, 00:21:53.160 { 00:21:53.160 "method": "bdev_iscsi_set_options", 00:21:53.160 "params": { 00:21:53.160 "timeout_sec": 30 00:21:53.160 } 00:21:53.160 }, 00:21:53.160 { 00:21:53.160 "method": "bdev_nvme_set_options", 00:21:53.160 "params": { 00:21:53.160 "action_on_timeout": "none", 
00:21:53.160 "timeout_us": 0, 00:21:53.160 "timeout_admin_us": 0, 00:21:53.160 "keep_alive_timeout_ms": 10000, 00:21:53.160 "arbitration_burst": 0, 00:21:53.160 "low_priority_weight": 0, 00:21:53.160 "medium_priority_weight": 0, 00:21:53.160 "high_priority_weight": 0, 00:21:53.160 "nvme_adminq_poll_period_us": 10000, 00:21:53.160 "nvme_ioq_poll_period_us": 0, 00:21:53.160 "io_queue_requests": 512, 00:21:53.160 "delay_cmd_submit": true, 00:21:53.160 "transport_retry_count": 4, 00:21:53.160 "bdev_retry_count": 3, 00:21:53.160 "transport_ack_timeout": 0, 00:21:53.160 "ctrlr_loss_timeout_sec": 0, 00:21:53.161 "reconnect_delay_sec": 0, 00:21:53.161 "fast_io_fail_timeout_sec": 0, 00:21:53.161 "disable_auto_failback": false, 00:21:53.161 "generate_uuids": false, 00:21:53.161 "transport_tos": 0, 00:21:53.161 "nvme_error_stat": false, 00:21:53.161 "rdma_srq_size": 0, 00:21:53.161 "io_path_stat": false, 00:21:53.161 "allow_accel_sequence": false, 00:21:53.161 "rdma_max_cq_size": 0, 00:21:53.161 "rdma_cm_event_timeout_ms": 0, 00:21:53.161 "dhchap_digests": [ 00:21:53.161 "sha256", 00:21:53.161 "sha384", 00:21:53.161 "sha512" 00:21:53.161 ], 00:21:53.161 "dhchap_dhgroups": [ 00:21:53.161 "null", 00:21:53.161 "ffdhe2048", 00:21:53.161 "ffdhe3072", 00:21:53.161 "ffdhe4096", 00:21:53.161 "ffdhe6144", 00:21:53.161 "ffdhe8192" 00:21:53.161 ] 00:21:53.161 } 00:21:53.161 }, 00:21:53.161 { 00:21:53.161 "method": "bdev_nvme_attach_controller", 00:21:53.161 "params": { 00:21:53.161 "name": "nvme0", 00:21:53.161 "trtype": "TCP", 00:21:53.161 "adrfam": "IPv4", 00:21:53.161 "traddr": "10.0.0.2", 00:21:53.161 "trsvcid": "4420", 00:21:53.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.161 "prchk_reftag": false, 00:21:53.161 "prchk_guard": false, 00:21:53.161 "ctrlr_loss_timeout_sec": 0, 00:21:53.161 "reconnect_delay_sec": 0, 00:21:53.161 "fast_io_fail_timeout_sec": 0, 00:21:53.161 "psk": "key0", 00:21:53.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:53.161 "hdgst": false, 
00:21:53.161 "ddgst": false 00:21:53.161 } 00:21:53.161 }, 00:21:53.161 { 00:21:53.161 "method": "bdev_nvme_set_hotplug", 00:21:53.161 "params": { 00:21:53.161 "period_us": 100000, 00:21:53.161 "enable": false 00:21:53.161 } 00:21:53.161 }, 00:21:53.161 { 00:21:53.161 "method": "bdev_enable_histogram", 00:21:53.161 "params": { 00:21:53.161 "name": "nvme0n1", 00:21:53.161 "enable": true 00:21:53.161 } 00:21:53.161 }, 00:21:53.161 { 00:21:53.161 "method": "bdev_wait_for_examine" 00:21:53.161 } 00:21:53.161 ] 00:21:53.161 }, 00:21:53.161 { 00:21:53.161 "subsystem": "nbd", 00:21:53.161 "config": [] 00:21:53.161 } 00:21:53.161 ] 00:21:53.161 }' 00:21:53.161 12:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:53.161 12:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.161 [2024-07-15 12:51:44.989804] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:21:53.161 [2024-07-15 12:51:44.989864] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3984955 ] 00:21:53.161 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.161 [2024-07-15 12:51:45.071444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.419 [2024-07-15 12:51:45.172205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.419 [2024-07-15 12:51:45.337407] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:54.353 12:51:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:54.353 12:51:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:54.353 12:51:45 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:21:54.353 12:51:45 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:54.353 12:51:46 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.353 12:51:46 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:54.612 Running I/O for 1 seconds... 00:21:55.547 00:21:55.547 Latency(us) 00:21:55.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.547 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:55.547 Verification LBA range: start 0x0 length 0x2000 00:21:55.547 nvme0n1 : 1.02 3798.55 14.84 0.00 0.00 33351.98 8162.21 36461.85 00:21:55.547 =================================================================================================================== 00:21:55.547 Total : 3798.55 14.84 0.00 0.00 33351.98 8162.21 36461.85 00:21:55.547 0 00:21:55.547 12:51:47 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:55.547 12:51:47 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:21:55.547 12:51:47 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:55.547 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:21:55.547 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:21:55.547 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:55.547 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:55.547 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:55.547 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:55.547 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:55.547 12:51:47 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:55.547 nvmf_trace.0 00:21:55.547 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:21:55.547 12:51:47 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3984955 00:21:55.547 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3984955 ']' 00:21:55.547 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3984955 00:21:55.547 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:55.547 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:55.548 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3984955 00:21:55.805 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:55.805 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:55.805 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3984955' 00:21:55.805 killing process with pid 3984955 00:21:55.805 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3984955 00:21:55.805 Received shutdown signal, test time was about 1.000000 seconds 00:21:55.805 00:21:55.805 Latency(us) 00:21:55.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.805 =================================================================================================================== 00:21:55.805 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:55.805 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3984955 00:21:55.805 12:51:47 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:55.805 12:51:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:55.805 12:51:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # 
sync 00:21:56.063 12:51:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:56.063 12:51:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:56.063 12:51:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:56.063 12:51:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:56.063 rmmod nvme_tcp 00:21:56.063 rmmod nvme_fabrics 00:21:56.063 rmmod nvme_keyring 00:21:56.063 12:51:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:56.063 12:51:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:56.063 12:51:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:56.063 12:51:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3984680 ']' 00:21:56.063 12:51:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3984680 00:21:56.063 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3984680 ']' 00:21:56.063 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3984680 00:21:56.063 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:56.063 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:56.063 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3984680 00:21:56.063 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:56.063 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:56.063 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3984680' 00:21:56.063 killing process with pid 3984680 00:21:56.063 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3984680 00:21:56.063 12:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3984680 00:21:56.321 12:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:56.321 12:51:48 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:56.321 12:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:56.321 12:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:56.321 12:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:56.321 12:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.322 12:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.322 12:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.225 12:51:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:58.225 12:51:50 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.XLqcQl73TT /tmp/tmp.mNIgDKfxVH /tmp/tmp.ZAw7DNhBVc 00:21:58.225 00:21:58.225 real 1m36.430s 00:21:58.225 user 2m37.076s 00:21:58.225 sys 0m28.069s 00:21:58.225 12:51:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:58.225 12:51:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.225 ************************************ 00:21:58.225 END TEST nvmf_tls 00:21:58.225 ************************************ 00:21:58.484 12:51:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:58.484 12:51:50 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:58.484 12:51:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:58.484 12:51:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:58.484 12:51:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:58.484 ************************************ 00:21:58.484 START TEST nvmf_fips 00:21:58.484 ************************************ 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:58.484 * Looking for test storage... 00:21:58.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:58.484 12:51:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:58.485 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:58.485 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:58.485 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:58.485 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:58.485 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:58.485 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:58.485 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:58.485 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:58.485 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:58.485 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:58.485 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:58.485 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:58.485 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:58.485 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:58.485 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:58.485 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:58.744 Error setting digest 00:21:58.744 00C2EEC3457F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:58.744 00C2EEC3457F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:21:58.744 12:51:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:22:05.314 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:22:05.315 Found 0000:af:00.0 (0x8086 - 0x159b)
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:22:05.315 Found 0000:af:00.1 (0x8086 - 0x159b)
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]]
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:22:05.315 Found net devices under 0000:af:00.0: cvl_0_0
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]]
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:22:05.315 Found net devices under 0000:af:00.1: cvl_0_1
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:22:05.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:05.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms
00:22:05.315
00:22:05.315 --- 10.0.0.2 ping statistics ---
00:22:05.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:05.315 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:05.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:05.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms
00:22:05.315
00:22:05.315 --- 10.0.0.1 ping statistics ---
00:22:05.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:05.315 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3988990
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3988990
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3988990 ']'
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:05.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:05.315 12:51:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:22:05.315 [2024-07-15 12:51:56.508721] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization...
00:22:05.315 [2024-07-15 12:51:56.508786] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:05.315 EAL: No free 2048 kB hugepages reported on node 1
00:22:05.315 [2024-07-15 12:51:56.595730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:05.315 [2024-07-15 12:51:56.700487] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:05.315 [2024-07-15 12:51:56.700531] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:05.315 [2024-07-15 12:51:56.700544] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:05.315 [2024-07-15 12:51:56.700555] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:05.315 [2024-07-15 12:51:56.700564] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:05.315 [2024-07-15 12:51:56.700589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:22:05.574 12:51:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:05.574 12:51:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0
00:22:05.574 12:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:22:05.574 12:51:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable
00:22:05.574 12:51:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:22:05.574 12:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:05.574 12:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT
00:22:05.574 12:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
00:22:05.574 12:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
00:22:05.574 12:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
00:22:05.574 12:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
00:22:05.574 12:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
00:22:05.574 12:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
00:22:05.574 12:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:22:05.834 [2024-07-15 12:51:57.693779] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:05.834 [2024-07-15 12:51:57.709749] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:22:05.834 [2024-07-15 12:51:57.709964] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:05.834 [2024-07-15 12:51:57.740099] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
malloc0
00:22:06.093 12:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:22:06.093 12:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3989272
00:22:06.093 12:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:22:06.093 12:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3989272 /var/tmp/bdevperf.sock
00:22:06.093 12:51:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3989272 ']'
00:22:06.093 12:51:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:06.093 12:51:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:06.093 12:51:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:06.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:06.093 12:51:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:06.093 12:51:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:22:06.093 [2024-07-15 12:51:57.853300] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization...
00:22:06.093 [2024-07-15 12:51:57.853369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3989272 ]
00:22:06.093 EAL: No free 2048 kB hugepages reported on node 1
00:22:06.093 [2024-07-15 12:51:57.967411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:06.351 [2024-07-15 12:51:58.115041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:22:06.952 12:51:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:06.952 12:51:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0
00:22:06.952 12:51:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
00:22:07.237 [2024-07-15 12:51:59.007018] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:07.237 [2024-07-15 12:51:59.007179] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:22:07.237 TLSTESTn1
00:22:07.237 12:51:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:07.497 Running I/O for 10 seconds...
00:22:17.476
00:22:17.476 Latency(us)
00:22:17.476 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:17.476 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:17.476 Verification LBA range: start 0x0 length 0x2000
00:22:17.476 TLSTESTn1 : 10.02 2807.08 10.97 0.00 0.00 45478.59 9889.98 44564.48
00:22:17.476 ===================================================================================================================
00:22:17.476 Total : 2807.08 10.97 0.00 0.00 45478.59 9889.98 44564.48
00:22:17.476 0
00:22:17.476 12:52:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:22:17.476 12:52:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:22:17.476 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id
00:22:17.476 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0
00:22:17.476 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:22:17.476 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:22:17.476 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:22:17.476 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:22:17.476 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files
00:22:17.476 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
nvmf_trace.0
00:22:17.476 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0
00:22:17.476 12:52:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3989272
00:22:17.476 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3989272 ']'
00:22:17.476 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3989272
00:22:17.476 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname
00:22:17.735 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:17.735 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3989272
00:22:17.735 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:22:17.735 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:22:17.735 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3989272'
killing process with pid 3989272
00:22:17.735 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3989272
Received shutdown signal, test time was about 10.000000 seconds
00:22:17.735
00:22:17.735 Latency(us)
00:22:17.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:17.735 ===================================================================================================================
00:22:17.735 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:17.735 [2024-07-15 12:52:09.463454] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3989272
00:22:17.994 12:52:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:22:17.994 12:52:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:17.994 12:52:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync
00:22:17.994 12:52:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:17.994 12:52:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e
00:22:17.994 12:52:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:17.994 12:52:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:22:17.994 rmmod nvme_tcp
00:22:17.994 rmmod nvme_fabrics
00:22:17.994 rmmod nvme_keyring
12:52:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:17.994 12:52:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e
00:22:17.994 12:52:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0
00:22:17.995 12:52:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3988990 ']'
00:22:17.995 12:52:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3988990
00:22:17.995 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3988990 ']'
00:22:17.995 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3988990
00:22:17.995 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname
00:22:17.995 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:17.995 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3988990
00:22:17.995 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:22:17.995 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:22:17.995 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3988990'
killing process with pid 3988990
00:22:17.995 12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3988990
00:22:17.995 [2024-07-15 12:52:09.860851] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
12:52:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3988990
00:22:18.253 12:52:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:22:18.253 12:52:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:22:18.253 12:52:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:22:18.254 12:52:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:18.254 12:52:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns
00:22:18.254 12:52:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:18.254 12:52:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:18.254 12:52:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:20.789 12:52:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:22:20.789 12:52:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
00:22:20.789
00:22:20.789 real 0m21.966s
00:22:20.789 user 0m24.983s
00:22:20.789 sys 0m8.638s
00:22:20.789 12:52:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable
00:22:20.789 12:52:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:22:20.789 ************************************
00:22:20.789 END TEST nvmf_fips
00:22:20.789 ************************************
00:22:20.789 12:52:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:22:20.789 12:52:12 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']'
00:22:20.789 12:52:12 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]]
00:22:20.789 12:52:12 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']'
00:22:20.789 12:52:12 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs
00:22:20.789 12:52:12 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable
00:22:20.789 12:52:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=()
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=()
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=()
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=()
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@296 -- # e810=()
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@297 -- # x722=()
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=()
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:22:26.112 Found 0000:af:00.0 (0x8086 - 0x159b)
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:22:26.112 Found 0000:af:00.1 (0x8086 - 0x159b)
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:22:26.112 12:52:17 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:22:26.113 12:52:17 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:26.113 12:52:17 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:22:26.113 12:52:17 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:26.113 12:52:17 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]]
00:22:26.113 12:52:17 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:22:26.113 12:52:17 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:26.113 12:52:17 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:22:26.113 Found net devices under 0000:af:00.0: cvl_0_0
00:22:26.113 12:52:17 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:22:26.113 12:52:17 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:22:26.113 12:52:17 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:26.113 12:52:17 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:22:26.113 12:52:17 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:26.113 12:52:17 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]]
00:22:26.113 12:52:17 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:22:26.113 12:52:17 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:26.113 12:52:17 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:22:26.113 Found net devices under 0000:af:00.1: cvl_0_1
00:22:26.113 12:52:17 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:22:26.113 12:52:17 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:22:26.113 12:52:17 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:26.113 12:52:17 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 ))
00:22:26.113 12:52:17 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp
00:22:26.113 12:52:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:22:26.113 12:52:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:22:26.113 12:52:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:22:26.113 ************************************
00:22:26.113 START TEST nvmf_perf_adq
00:22:26.113 ************************************
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp
00:22:26.113 * Looking for test storage...
00:22:26.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:26.113 12:52:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- 
# x722=() 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:31.388 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:31.388 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.388 12:52:22 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:31.389 Found net devices under 0000:af:00.0: cvl_0_0 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:31.389 Found net devices under 0000:af:00.1: cvl_0_1 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:31.389 12:52:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:32.762 12:52:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:34.667 12:52:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:39.932 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.932 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 
(0x8086 - 0x159b)' 00:22:39.933 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:39.933 Found net devices under 0000:af:00.0: cvl_0_0 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:39.933 Found net devices under 0000:af:00.1: cvl_0_1 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:39.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:22:39.933 00:22:39.933 --- 10.0.0.2 ping statistics --- 00:22:39.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.933 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:39.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:22:39.933 00:22:39.933 --- 10.0.0.1 ping statistics --- 00:22:39.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.933 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3999720 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3999720 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 
-- # '[' -z 3999720 ']' 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:39.933 12:52:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.933 [2024-07-15 12:52:31.800578] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:22:39.933 [2024-07-15 12:52:31.800634] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.933 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.191 [2024-07-15 12:52:31.888984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:40.191 [2024-07-15 12:52:31.978623] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.191 [2024-07-15 12:52:31.978667] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.191 [2024-07-15 12:52:31.978677] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.191 [2024-07-15 12:52:31.978686] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.191 [2024-07-15 12:52:31.978694] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:40.191 [2024-07-15 12:52:31.978748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.191 [2024-07-15 12:52:31.978885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.191 [2024-07-15 12:52:31.978998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.191 [2024-07-15 12:52:31.978998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.127 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:41.127 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:41.127 12:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:41.127 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:41.127 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.127 12:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.127 12:52:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:41.127 12:52:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:41.127 12:52:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:41.127 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.127 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.127 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.127 12:52:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:41.127 12:52:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:41.127 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:22:41.127 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.127 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.127 12:52:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:41.127 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.127 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.127 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.128 12:52:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:41.128 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.128 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.128 [2024-07-15 12:52:32.951263] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.128 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.128 12:52:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:41.128 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.128 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.128 Malloc1 00:22:41.128 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.128 12:52:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:41.128 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.128 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.128 12:52:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.128 
12:52:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:41.128 12:52:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.128 12:52:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.128 12:52:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.128 12:52:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:41.128 12:52:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.128 12:52:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.128 [2024-07-15 12:52:33.012069] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.128 12:52:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.128 12:52:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3999886 00:22:41.128 12:52:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:41.128 12:52:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:41.128 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.667 12:52:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:43.667 12:52:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.667 12:52:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:43.667 12:52:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.667 12:52:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:43.667 
"tick_rate": 2200000000, 00:22:43.667 "poll_groups": [ 00:22:43.667 { 00:22:43.667 "name": "nvmf_tgt_poll_group_000", 00:22:43.667 "admin_qpairs": 1, 00:22:43.667 "io_qpairs": 1, 00:22:43.667 "current_admin_qpairs": 1, 00:22:43.667 "current_io_qpairs": 1, 00:22:43.667 "pending_bdev_io": 0, 00:22:43.667 "completed_nvme_io": 12256, 00:22:43.667 "transports": [ 00:22:43.667 { 00:22:43.667 "trtype": "TCP" 00:22:43.667 } 00:22:43.667 ] 00:22:43.667 }, 00:22:43.667 { 00:22:43.667 "name": "nvmf_tgt_poll_group_001", 00:22:43.667 "admin_qpairs": 0, 00:22:43.667 "io_qpairs": 1, 00:22:43.667 "current_admin_qpairs": 0, 00:22:43.667 "current_io_qpairs": 1, 00:22:43.667 "pending_bdev_io": 0, 00:22:43.667 "completed_nvme_io": 8350, 00:22:43.667 "transports": [ 00:22:43.667 { 00:22:43.667 "trtype": "TCP" 00:22:43.667 } 00:22:43.667 ] 00:22:43.667 }, 00:22:43.667 { 00:22:43.667 "name": "nvmf_tgt_poll_group_002", 00:22:43.667 "admin_qpairs": 0, 00:22:43.667 "io_qpairs": 1, 00:22:43.667 "current_admin_qpairs": 0, 00:22:43.667 "current_io_qpairs": 1, 00:22:43.667 "pending_bdev_io": 0, 00:22:43.667 "completed_nvme_io": 8355, 00:22:43.667 "transports": [ 00:22:43.667 { 00:22:43.667 "trtype": "TCP" 00:22:43.667 } 00:22:43.667 ] 00:22:43.667 }, 00:22:43.667 { 00:22:43.667 "name": "nvmf_tgt_poll_group_003", 00:22:43.667 "admin_qpairs": 0, 00:22:43.667 "io_qpairs": 1, 00:22:43.667 "current_admin_qpairs": 0, 00:22:43.667 "current_io_qpairs": 1, 00:22:43.667 "pending_bdev_io": 0, 00:22:43.667 "completed_nvme_io": 13707, 00:22:43.667 "transports": [ 00:22:43.667 { 00:22:43.667 "trtype": "TCP" 00:22:43.667 } 00:22:43.667 ] 00:22:43.667 } 00:22:43.667 ] 00:22:43.667 }' 00:22:43.667 12:52:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:43.667 12:52:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:43.667 12:52:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:43.667 12:52:35 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:43.667 12:52:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3999886 00:22:51.788 Initializing NVMe Controllers 00:22:51.788 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:51.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:51.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:51.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:51.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:51.788 Initialization complete. Launching workers. 00:22:51.788 ======================================================== 00:22:51.788 Latency(us) 00:22:51.788 Device Information : IOPS MiB/s Average min max 00:22:51.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7169.20 28.00 8927.19 2977.93 12493.60 00:22:51.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4426.40 17.29 14464.45 5302.08 24286.18 00:22:51.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4445.00 17.36 14405.06 5377.00 24621.46 00:22:51.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6479.80 25.31 9887.34 3536.10 44573.92 00:22:51.788 ======================================================== 00:22:51.788 Total : 22520.39 87.97 11373.01 2977.93 44573.92 00:22:51.788 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:51.788 12:52:43 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:51.788 rmmod nvme_tcp 00:22:51.788 rmmod nvme_fabrics 00:22:51.788 rmmod nvme_keyring 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3999720 ']' 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3999720 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3999720 ']' 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3999720 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3999720 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3999720' 00:22:51.788 killing process with pid 3999720 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3999720 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3999720 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:51.788 12:52:43 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.788 12:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.325 12:52:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:54.325 12:52:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:54.325 12:52:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:55.263 12:52:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:57.169 12:52:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.476 12:52:54 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:02.476 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:02.476 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:02.476 Found net devices under 0000:af:00.0: cvl_0_0 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:02.476 Found net devices under 0000:af:00.1: cvl_0_1 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:02.476 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:02.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:02.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:23:02.477 00:23:02.477 --- 10.0.0.2 ping statistics --- 00:23:02.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.477 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:02.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:02.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:23:02.477 00:23:02.477 --- 10.0.0.1 ping statistics --- 00:23:02.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.477 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:02.477 net.core.busy_poll = 1 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:02.477 net.core.busy_read = 1 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:02.477 12:52:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:02.736 12:52:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:02.736 12:52:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:02.736 12:52:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:02.736 12:52:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:02.736 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:02.736 12:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:02.736 12:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.736 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=4004078 00:23:02.736 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 4004078 00:23:02.736 12:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:02.736 12:52:54 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 4004078 ']' 00:23:02.736 12:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.736 12:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:02.736 12:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.736 12:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:02.736 12:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.736 [2024-07-15 12:52:54.658011] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:23:02.736 [2024-07-15 12:52:54.658071] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.033 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.033 [2024-07-15 12:52:54.745327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:03.033 [2024-07-15 12:52:54.833085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.033 [2024-07-15 12:52:54.833131] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.033 [2024-07-15 12:52:54.833141] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.033 [2024-07-15 12:52:54.833150] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:03.033 [2024-07-15 12:52:54.833158] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:03.033 [2024-07-15 12:52:54.833276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.033 [2024-07-15 12:52:54.833349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:03.033 [2024-07-15 12:52:54.833437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:03.033 [2024-07-15 12:52:54.833438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 
--enable-zerocopy-send-server -i posix 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.998 [2024-07-15 12:52:55.798596] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.998 Malloc1 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.998 [2024-07-15 12:52:55.858340] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=4004366 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:23:03.998 12:52:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:03.998 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.532 12:52:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:23:06.532 12:52:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.532 12:52:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.532 12:52:57 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.532 12:52:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:23:06.532 "tick_rate": 2200000000, 00:23:06.532 "poll_groups": [ 00:23:06.532 { 00:23:06.532 "name": "nvmf_tgt_poll_group_000", 00:23:06.532 "admin_qpairs": 1, 00:23:06.532 "io_qpairs": 3, 00:23:06.532 "current_admin_qpairs": 1, 00:23:06.532 "current_io_qpairs": 3, 00:23:06.532 "pending_bdev_io": 0, 00:23:06.532 "completed_nvme_io": 17260, 00:23:06.532 "transports": [ 00:23:06.532 { 00:23:06.532 "trtype": "TCP" 00:23:06.532 } 00:23:06.532 ] 00:23:06.532 }, 00:23:06.532 { 00:23:06.532 "name": "nvmf_tgt_poll_group_001", 00:23:06.532 "admin_qpairs": 0, 00:23:06.532 "io_qpairs": 1, 00:23:06.532 "current_admin_qpairs": 0, 00:23:06.532 "current_io_qpairs": 1, 00:23:06.532 "pending_bdev_io": 0, 00:23:06.532 "completed_nvme_io": 10514, 00:23:06.532 "transports": [ 00:23:06.532 { 00:23:06.532 "trtype": "TCP" 00:23:06.532 } 00:23:06.532 ] 00:23:06.532 }, 00:23:06.532 { 00:23:06.532 "name": "nvmf_tgt_poll_group_002", 00:23:06.532 "admin_qpairs": 0, 00:23:06.533 "io_qpairs": 0, 00:23:06.533 "current_admin_qpairs": 0, 00:23:06.533 "current_io_qpairs": 0, 00:23:06.533 "pending_bdev_io": 0, 00:23:06.533 "completed_nvme_io": 0, 00:23:06.533 "transports": [ 00:23:06.533 { 00:23:06.533 "trtype": "TCP" 00:23:06.533 } 00:23:06.533 ] 00:23:06.533 }, 00:23:06.533 { 00:23:06.533 "name": "nvmf_tgt_poll_group_003", 00:23:06.533 "admin_qpairs": 0, 00:23:06.533 "io_qpairs": 0, 00:23:06.533 "current_admin_qpairs": 0, 00:23:06.533 "current_io_qpairs": 0, 00:23:06.533 "pending_bdev_io": 0, 00:23:06.533 "completed_nvme_io": 0, 00:23:06.533 "transports": [ 00:23:06.533 { 00:23:06.533 "trtype": "TCP" 00:23:06.533 } 00:23:06.533 ] 00:23:06.533 } 00:23:06.533 ] 00:23:06.533 }' 00:23:06.533 12:52:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:06.533 12:52:57 nvmf_tcp.nvmf_perf_adq 
-- target/perf_adq.sh@100 -- # wc -l 00:23:06.533 12:52:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:23:06.533 12:52:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:23:06.533 12:52:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 4004366 00:23:14.652 Initializing NVMe Controllers 00:23:14.652 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:14.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:14.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:14.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:14.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:14.652 Initialization complete. Launching workers. 00:23:14.652 ======================================================== 00:23:14.652 Latency(us) 00:23:14.652 Device Information : IOPS MiB/s Average min max 00:23:14.652 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5459.10 21.32 11726.89 4826.49 15873.29 00:23:14.652 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 3024.90 11.82 21177.97 2849.43 70314.69 00:23:14.652 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 2945.40 11.51 21746.81 3306.17 70187.73 00:23:14.652 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 2939.60 11.48 21791.15 3361.99 74266.60 00:23:14.652 ======================================================== 00:23:14.652 Total : 14369.00 56.13 17829.34 2849.43 74266.60 00:23:14.652 00:23:14.652 12:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:23:14.652 12:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:14.652 rmmod nvme_tcp 00:23:14.652 rmmod nvme_fabrics 00:23:14.652 rmmod nvme_keyring 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 4004078 ']' 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 4004078 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 4004078 ']' 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 4004078 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4004078 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4004078' 00:23:14.652 killing process with pid 4004078 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 4004078 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 4004078 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 
-- # '[' '' == iso ']' 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.652 12:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.936 12:53:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:17.936 12:53:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:17.936 00:23:17.936 real 0m51.908s 00:23:17.936 user 2m51.006s 00:23:17.936 sys 0m9.483s 00:23:17.936 12:53:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:17.936 12:53:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.936 ************************************ 00:23:17.936 END TEST nvmf_perf_adq 00:23:17.936 ************************************ 00:23:17.936 12:53:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:17.936 12:53:09 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:17.936 12:53:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:17.936 12:53:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:17.936 12:53:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:17.936 ************************************ 00:23:17.936 START TEST nvmf_shutdown 00:23:17.936 ************************************ 
00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:17.936 * Looking for test storage... 00:23:17.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:17.936 12:53:09 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:17.936 ************************************ 00:23:17.936 START TEST nvmf_shutdown_tc1 00:23:17.936 ************************************ 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.936 12:53:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:17.936 12:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:24.500 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.500 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:24.500 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:24.500 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:24.500 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:24.500 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:24.500 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:24.500 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:24.500 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:24.500 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:24.500 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:24.500 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:24.500 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:24.500 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:23:24.500 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:24.500 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:24.501 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:24.501 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:24.501 Found net devices under 0000:af:00.0: cvl_0_0 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.501 12:53:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:24.501 Found net devices under 0000:af:00.1: cvl_0_1 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:24.501 
12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:24.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:24.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:23:24.501 00:23:24.501 --- 10.0.0.2 ping statistics --- 00:23:24.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.501 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:24.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:23:24.501 00:23:24.501 --- 10.0.0.1 ping statistics --- 00:23:24.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.501 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=4010033 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 4010033 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 4010033 ']' 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.501 12:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:24.501 [2024-07-15 12:53:15.595184] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:23:24.501 [2024-07-15 12:53:15.595238] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.501 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.501 [2024-07-15 12:53:15.681967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:24.501 [2024-07-15 12:53:15.788104] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.501 [2024-07-15 12:53:15.788150] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.502 [2024-07-15 12:53:15.788163] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.502 [2024-07-15 12:53:15.788175] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.502 [2024-07-15 12:53:15.788185] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:24.502 [2024-07-15 12:53:15.788312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.502 [2024-07-15 12:53:15.788424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:24.502 [2024-07-15 12:53:15.788535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:24.502 [2024-07-15 12:53:15.788537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:24.761 [2024-07-15 12:53:16.511781] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:24.761 
12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:24.761 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:24.762 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:24.762 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:24.762 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:24.762 12:53:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:24.762 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:24.762 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:24.762 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:24.762 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:24.762 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:24.762 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:24.762 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.762 12:53:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:24.762 Malloc1 00:23:24.762 [2024-07-15 12:53:16.629533] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.762 Malloc2 00:23:25.021 Malloc3 00:23:25.021 Malloc4 00:23:25.021 Malloc5 00:23:25.021 Malloc6 00:23:25.021 Malloc7 00:23:25.021 Malloc8 00:23:25.281 Malloc9 00:23:25.281 Malloc10 00:23:25.281 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.281 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:25.281 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:25.281 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.281 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=4010344 00:23:25.281 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 4010344 
/var/tmp/bdevperf.sock 00:23:25.281 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 4010344 ']' 00:23:25.281 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.281 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:25.281 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:25.281 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:25.281 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:25.281 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:25.281 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:25.281 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:25.281 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.281 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.281 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.281 { 00:23:25.281 "params": { 00:23:25.281 "name": "Nvme$subsystem", 00:23:25.281 "trtype": "$TEST_TRANSPORT", 00:23:25.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.281 "adrfam": "ipv4", 00:23:25.281 "trsvcid": "$NVMF_PORT", 00:23:25.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.281 "hdgst": ${hdgst:-false}, 00:23:25.281 "ddgst": ${ddgst:-false} 00:23:25.281 }, 00:23:25.281 "method": "bdev_nvme_attach_controller" 00:23:25.281 } 00:23:25.281 EOF 00:23:25.281 )") 00:23:25.281 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.281 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.281 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.282 { 00:23:25.282 "params": { 00:23:25.282 "name": "Nvme$subsystem", 00:23:25.282 "trtype": "$TEST_TRANSPORT", 00:23:25.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.282 "adrfam": "ipv4", 00:23:25.282 "trsvcid": "$NVMF_PORT", 00:23:25.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.282 "hdgst": ${hdgst:-false}, 00:23:25.282 "ddgst": ${ddgst:-false} 00:23:25.282 
}, 00:23:25.282 "method": "bdev_nvme_attach_controller" 00:23:25.282 } 00:23:25.282 EOF 00:23:25.282 )") 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.282 { 00:23:25.282 "params": { 00:23:25.282 "name": "Nvme$subsystem", 00:23:25.282 "trtype": "$TEST_TRANSPORT", 00:23:25.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.282 "adrfam": "ipv4", 00:23:25.282 "trsvcid": "$NVMF_PORT", 00:23:25.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.282 "hdgst": ${hdgst:-false}, 00:23:25.282 "ddgst": ${ddgst:-false} 00:23:25.282 }, 00:23:25.282 "method": "bdev_nvme_attach_controller" 00:23:25.282 } 00:23:25.282 EOF 00:23:25.282 )") 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.282 { 00:23:25.282 "params": { 00:23:25.282 "name": "Nvme$subsystem", 00:23:25.282 "trtype": "$TEST_TRANSPORT", 00:23:25.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.282 "adrfam": "ipv4", 00:23:25.282 "trsvcid": "$NVMF_PORT", 00:23:25.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.282 "hdgst": ${hdgst:-false}, 00:23:25.282 "ddgst": ${ddgst:-false} 00:23:25.282 }, 00:23:25.282 "method": "bdev_nvme_attach_controller" 00:23:25.282 } 00:23:25.282 EOF 00:23:25.282 )") 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.282 12:53:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.282 { 00:23:25.282 "params": { 00:23:25.282 "name": "Nvme$subsystem", 00:23:25.282 "trtype": "$TEST_TRANSPORT", 00:23:25.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.282 "adrfam": "ipv4", 00:23:25.282 "trsvcid": "$NVMF_PORT", 00:23:25.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.282 "hdgst": ${hdgst:-false}, 00:23:25.282 "ddgst": ${ddgst:-false} 00:23:25.282 }, 00:23:25.282 "method": "bdev_nvme_attach_controller" 00:23:25.282 } 00:23:25.282 EOF 00:23:25.282 )") 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.282 { 00:23:25.282 "params": { 00:23:25.282 "name": "Nvme$subsystem", 00:23:25.282 "trtype": "$TEST_TRANSPORT", 00:23:25.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.282 "adrfam": "ipv4", 00:23:25.282 "trsvcid": "$NVMF_PORT", 00:23:25.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.282 "hdgst": ${hdgst:-false}, 00:23:25.282 "ddgst": ${ddgst:-false} 00:23:25.282 }, 00:23:25.282 "method": "bdev_nvme_attach_controller" 00:23:25.282 } 00:23:25.282 EOF 00:23:25.282 )") 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.282 { 00:23:25.282 
"params": { 00:23:25.282 "name": "Nvme$subsystem", 00:23:25.282 "trtype": "$TEST_TRANSPORT", 00:23:25.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.282 "adrfam": "ipv4", 00:23:25.282 "trsvcid": "$NVMF_PORT", 00:23:25.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.282 "hdgst": ${hdgst:-false}, 00:23:25.282 "ddgst": ${ddgst:-false} 00:23:25.282 }, 00:23:25.282 "method": "bdev_nvme_attach_controller" 00:23:25.282 } 00:23:25.282 EOF 00:23:25.282 )") 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.282 [2024-07-15 12:53:17.133657] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:23:25.282 [2024-07-15 12:53:17.133717] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.282 { 00:23:25.282 "params": { 00:23:25.282 "name": "Nvme$subsystem", 00:23:25.282 "trtype": "$TEST_TRANSPORT", 00:23:25.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.282 "adrfam": "ipv4", 00:23:25.282 "trsvcid": "$NVMF_PORT", 00:23:25.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.282 "hdgst": ${hdgst:-false}, 00:23:25.282 "ddgst": ${ddgst:-false} 00:23:25.282 }, 00:23:25.282 "method": "bdev_nvme_attach_controller" 00:23:25.282 } 00:23:25.282 EOF 00:23:25.282 )") 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.282 { 00:23:25.282 "params": { 00:23:25.282 "name": "Nvme$subsystem", 00:23:25.282 "trtype": "$TEST_TRANSPORT", 00:23:25.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.282 "adrfam": "ipv4", 00:23:25.282 "trsvcid": "$NVMF_PORT", 00:23:25.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.282 "hdgst": ${hdgst:-false}, 00:23:25.282 "ddgst": ${ddgst:-false} 00:23:25.282 }, 00:23:25.282 "method": "bdev_nvme_attach_controller" 00:23:25.282 } 00:23:25.282 EOF 00:23:25.282 )") 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.282 { 00:23:25.282 "params": { 00:23:25.282 "name": "Nvme$subsystem", 00:23:25.282 "trtype": "$TEST_TRANSPORT", 00:23:25.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.282 "adrfam": "ipv4", 00:23:25.282 "trsvcid": "$NVMF_PORT", 00:23:25.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.282 "hdgst": ${hdgst:-false}, 00:23:25.282 "ddgst": ${ddgst:-false} 00:23:25.282 }, 00:23:25.282 "method": "bdev_nvme_attach_controller" 00:23:25.282 } 00:23:25.282 EOF 00:23:25.282 )") 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:25.282 12:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:25.282 "params": { 00:23:25.282 "name": "Nvme1", 00:23:25.282 "trtype": "tcp", 00:23:25.282 "traddr": "10.0.0.2", 00:23:25.282 "adrfam": "ipv4", 00:23:25.282 "trsvcid": "4420", 00:23:25.282 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.282 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:25.282 "hdgst": false, 00:23:25.282 "ddgst": false 00:23:25.282 }, 00:23:25.282 "method": "bdev_nvme_attach_controller" 00:23:25.282 },{ 00:23:25.282 "params": { 00:23:25.282 "name": "Nvme2", 00:23:25.282 "trtype": "tcp", 00:23:25.282 "traddr": "10.0.0.2", 00:23:25.282 "adrfam": "ipv4", 00:23:25.282 "trsvcid": "4420", 00:23:25.282 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:25.282 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:25.282 "hdgst": false, 00:23:25.282 "ddgst": false 00:23:25.282 }, 00:23:25.282 "method": "bdev_nvme_attach_controller" 00:23:25.282 },{ 00:23:25.282 "params": { 00:23:25.282 "name": "Nvme3", 00:23:25.282 "trtype": "tcp", 00:23:25.282 "traddr": "10.0.0.2", 00:23:25.282 "adrfam": "ipv4", 00:23:25.282 "trsvcid": "4420", 00:23:25.282 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:25.282 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:25.282 "hdgst": false, 00:23:25.282 "ddgst": false 00:23:25.282 }, 00:23:25.282 "method": "bdev_nvme_attach_controller" 00:23:25.282 },{ 00:23:25.282 "params": { 00:23:25.282 "name": "Nvme4", 00:23:25.282 "trtype": "tcp", 00:23:25.282 "traddr": "10.0.0.2", 00:23:25.282 "adrfam": "ipv4", 00:23:25.282 "trsvcid": "4420", 00:23:25.282 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:25.282 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:25.282 "hdgst": false, 00:23:25.282 "ddgst": false 00:23:25.282 }, 00:23:25.282 "method": "bdev_nvme_attach_controller" 00:23:25.282 },{ 00:23:25.282 "params": { 00:23:25.282 "name": "Nvme5", 00:23:25.282 
"trtype": "tcp", 00:23:25.283 "traddr": "10.0.0.2", 00:23:25.283 "adrfam": "ipv4", 00:23:25.283 "trsvcid": "4420", 00:23:25.283 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:25.283 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:25.283 "hdgst": false, 00:23:25.283 "ddgst": false 00:23:25.283 }, 00:23:25.283 "method": "bdev_nvme_attach_controller" 00:23:25.283 },{ 00:23:25.283 "params": { 00:23:25.283 "name": "Nvme6", 00:23:25.283 "trtype": "tcp", 00:23:25.283 "traddr": "10.0.0.2", 00:23:25.283 "adrfam": "ipv4", 00:23:25.283 "trsvcid": "4420", 00:23:25.283 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:25.283 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:25.283 "hdgst": false, 00:23:25.283 "ddgst": false 00:23:25.283 }, 00:23:25.283 "method": "bdev_nvme_attach_controller" 00:23:25.283 },{ 00:23:25.283 "params": { 00:23:25.283 "name": "Nvme7", 00:23:25.283 "trtype": "tcp", 00:23:25.283 "traddr": "10.0.0.2", 00:23:25.283 "adrfam": "ipv4", 00:23:25.283 "trsvcid": "4420", 00:23:25.283 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:25.283 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:25.283 "hdgst": false, 00:23:25.283 "ddgst": false 00:23:25.283 }, 00:23:25.283 "method": "bdev_nvme_attach_controller" 00:23:25.283 },{ 00:23:25.283 "params": { 00:23:25.283 "name": "Nvme8", 00:23:25.283 "trtype": "tcp", 00:23:25.283 "traddr": "10.0.0.2", 00:23:25.283 "adrfam": "ipv4", 00:23:25.283 "trsvcid": "4420", 00:23:25.283 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:25.283 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:25.283 "hdgst": false, 00:23:25.283 "ddgst": false 00:23:25.283 }, 00:23:25.283 "method": "bdev_nvme_attach_controller" 00:23:25.283 },{ 00:23:25.283 "params": { 00:23:25.283 "name": "Nvme9", 00:23:25.283 "trtype": "tcp", 00:23:25.283 "traddr": "10.0.0.2", 00:23:25.283 "adrfam": "ipv4", 00:23:25.283 "trsvcid": "4420", 00:23:25.283 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:25.283 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:25.283 "hdgst": false, 00:23:25.283 "ddgst": 
false 00:23:25.283 }, 00:23:25.283 "method": "bdev_nvme_attach_controller" 00:23:25.283 },{ 00:23:25.283 "params": { 00:23:25.283 "name": "Nvme10", 00:23:25.283 "trtype": "tcp", 00:23:25.283 "traddr": "10.0.0.2", 00:23:25.283 "adrfam": "ipv4", 00:23:25.283 "trsvcid": "4420", 00:23:25.283 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:25.283 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:25.283 "hdgst": false, 00:23:25.283 "ddgst": false 00:23:25.283 }, 00:23:25.283 "method": "bdev_nvme_attach_controller" 00:23:25.283 }' 00:23:25.283 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.283 [2024-07-15 12:53:17.216438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.542 [2024-07-15 12:53:17.302100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.449 12:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:27.449 12:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:27.449 12:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:27.449 12:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.449 12:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:27.449 12:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.449 12:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 4010344 00:23:27.449 12:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:27.449 12:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:28.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 4010344 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 
1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 4010033 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.386 { 00:23:28.386 "params": { 00:23:28.386 "name": "Nvme$subsystem", 00:23:28.386 "trtype": "$TEST_TRANSPORT", 00:23:28.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.386 "adrfam": "ipv4", 00:23:28.386 "trsvcid": "$NVMF_PORT", 00:23:28.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.386 "hdgst": ${hdgst:-false}, 00:23:28.386 "ddgst": ${ddgst:-false} 00:23:28.386 }, 00:23:28.386 "method": "bdev_nvme_attach_controller" 00:23:28.386 } 00:23:28.386 EOF 00:23:28.386 )") 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.386 { 00:23:28.386 "params": { 00:23:28.386 "name": "Nvme$subsystem", 00:23:28.386 "trtype": "$TEST_TRANSPORT", 00:23:28.386 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.386 "adrfam": "ipv4", 00:23:28.386 "trsvcid": "$NVMF_PORT", 00:23:28.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.386 "hdgst": ${hdgst:-false}, 00:23:28.386 "ddgst": ${ddgst:-false} 00:23:28.386 }, 00:23:28.386 "method": "bdev_nvme_attach_controller" 00:23:28.386 } 00:23:28.386 EOF 00:23:28.386 )") 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.386 { 00:23:28.386 "params": { 00:23:28.386 "name": "Nvme$subsystem", 00:23:28.386 "trtype": "$TEST_TRANSPORT", 00:23:28.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.386 "adrfam": "ipv4", 00:23:28.386 "trsvcid": "$NVMF_PORT", 00:23:28.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.386 "hdgst": ${hdgst:-false}, 00:23:28.386 "ddgst": ${ddgst:-false} 00:23:28.386 }, 00:23:28.386 "method": "bdev_nvme_attach_controller" 00:23:28.386 } 00:23:28.386 EOF 00:23:28.386 )") 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.386 { 00:23:28.386 "params": { 00:23:28.386 "name": "Nvme$subsystem", 00:23:28.386 "trtype": "$TEST_TRANSPORT", 00:23:28.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.386 "adrfam": "ipv4", 00:23:28.386 "trsvcid": "$NVMF_PORT", 00:23:28.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:23:28.386 "hdgst": ${hdgst:-false}, 00:23:28.386 "ddgst": ${ddgst:-false} 00:23:28.386 }, 00:23:28.386 "method": "bdev_nvme_attach_controller" 00:23:28.386 } 00:23:28.386 EOF 00:23:28.386 )") 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.386 { 00:23:28.386 "params": { 00:23:28.386 "name": "Nvme$subsystem", 00:23:28.386 "trtype": "$TEST_TRANSPORT", 00:23:28.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.386 "adrfam": "ipv4", 00:23:28.386 "trsvcid": "$NVMF_PORT", 00:23:28.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.386 "hdgst": ${hdgst:-false}, 00:23:28.386 "ddgst": ${ddgst:-false} 00:23:28.386 }, 00:23:28.386 "method": "bdev_nvme_attach_controller" 00:23:28.386 } 00:23:28.386 EOF 00:23:28.386 )") 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.386 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.386 { 00:23:28.386 "params": { 00:23:28.387 "name": "Nvme$subsystem", 00:23:28.387 "trtype": "$TEST_TRANSPORT", 00:23:28.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.387 "adrfam": "ipv4", 00:23:28.387 "trsvcid": "$NVMF_PORT", 00:23:28.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.387 "hdgst": ${hdgst:-false}, 00:23:28.387 "ddgst": ${ddgst:-false} 00:23:28.387 }, 00:23:28.387 "method": "bdev_nvme_attach_controller" 00:23:28.387 } 00:23:28.387 EOF 00:23:28.387 )") 00:23:28.387 12:53:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.387 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.387 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.387 { 00:23:28.387 "params": { 00:23:28.387 "name": "Nvme$subsystem", 00:23:28.387 "trtype": "$TEST_TRANSPORT", 00:23:28.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.387 "adrfam": "ipv4", 00:23:28.387 "trsvcid": "$NVMF_PORT", 00:23:28.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.387 "hdgst": ${hdgst:-false}, 00:23:28.387 "ddgst": ${ddgst:-false} 00:23:28.387 }, 00:23:28.387 "method": "bdev_nvme_attach_controller" 00:23:28.387 } 00:23:28.387 EOF 00:23:28.387 )") 00:23:28.387 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.387 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.387 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.387 { 00:23:28.387 "params": { 00:23:28.387 "name": "Nvme$subsystem", 00:23:28.387 "trtype": "$TEST_TRANSPORT", 00:23:28.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.387 "adrfam": "ipv4", 00:23:28.387 "trsvcid": "$NVMF_PORT", 00:23:28.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.387 "hdgst": ${hdgst:-false}, 00:23:28.387 "ddgst": ${ddgst:-false} 00:23:28.387 }, 00:23:28.387 "method": "bdev_nvme_attach_controller" 00:23:28.387 } 00:23:28.387 EOF 00:23:28.387 )") 00:23:28.387 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.387 [2024-07-15 12:53:20.088327] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:23:28.387 [2024-07-15 12:53:20.088392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4010893 ] 00:23:28.387 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.387 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.387 { 00:23:28.387 "params": { 00:23:28.387 "name": "Nvme$subsystem", 00:23:28.387 "trtype": "$TEST_TRANSPORT", 00:23:28.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.387 "adrfam": "ipv4", 00:23:28.387 "trsvcid": "$NVMF_PORT", 00:23:28.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.387 "hdgst": ${hdgst:-false}, 00:23:28.387 "ddgst": ${ddgst:-false} 00:23:28.387 }, 00:23:28.387 "method": "bdev_nvme_attach_controller" 00:23:28.387 } 00:23:28.387 EOF 00:23:28.387 )") 00:23:28.387 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.387 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.387 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.387 { 00:23:28.387 "params": { 00:23:28.387 "name": "Nvme$subsystem", 00:23:28.387 "trtype": "$TEST_TRANSPORT", 00:23:28.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.387 "adrfam": "ipv4", 00:23:28.387 "trsvcid": "$NVMF_PORT", 00:23:28.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.387 "hdgst": ${hdgst:-false}, 00:23:28.387 "ddgst": ${ddgst:-false} 00:23:28.387 }, 00:23:28.387 "method": "bdev_nvme_attach_controller" 00:23:28.387 } 00:23:28.387 EOF 00:23:28.387 )") 00:23:28.387 12:53:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.387 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:28.387 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:28.387 12:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:28.387 "params": { 00:23:28.387 "name": "Nvme1", 00:23:28.387 "trtype": "tcp", 00:23:28.387 "traddr": "10.0.0.2", 00:23:28.387 "adrfam": "ipv4", 00:23:28.387 "trsvcid": "4420", 00:23:28.387 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.387 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:28.387 "hdgst": false, 00:23:28.387 "ddgst": false 00:23:28.387 }, 00:23:28.387 "method": "bdev_nvme_attach_controller" 00:23:28.387 },{ 00:23:28.387 "params": { 00:23:28.387 "name": "Nvme2", 00:23:28.387 "trtype": "tcp", 00:23:28.387 "traddr": "10.0.0.2", 00:23:28.387 "adrfam": "ipv4", 00:23:28.387 "trsvcid": "4420", 00:23:28.387 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:28.387 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:28.387 "hdgst": false, 00:23:28.387 "ddgst": false 00:23:28.387 }, 00:23:28.387 "method": "bdev_nvme_attach_controller" 00:23:28.387 },{ 00:23:28.387 "params": { 00:23:28.387 "name": "Nvme3", 00:23:28.387 "trtype": "tcp", 00:23:28.387 "traddr": "10.0.0.2", 00:23:28.387 "adrfam": "ipv4", 00:23:28.387 "trsvcid": "4420", 00:23:28.387 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:28.387 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:28.387 "hdgst": false, 00:23:28.387 "ddgst": false 00:23:28.387 }, 00:23:28.387 "method": "bdev_nvme_attach_controller" 00:23:28.387 },{ 00:23:28.387 "params": { 00:23:28.387 "name": "Nvme4", 00:23:28.387 "trtype": "tcp", 00:23:28.387 "traddr": "10.0.0.2", 00:23:28.387 "adrfam": "ipv4", 00:23:28.387 "trsvcid": "4420", 00:23:28.387 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:28.387 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:28.387 "hdgst": false, 00:23:28.387 
"ddgst": false 00:23:28.387 }, 00:23:28.387 "method": "bdev_nvme_attach_controller" 00:23:28.387 },{ 00:23:28.387 "params": { 00:23:28.387 "name": "Nvme5", 00:23:28.387 "trtype": "tcp", 00:23:28.387 "traddr": "10.0.0.2", 00:23:28.387 "adrfam": "ipv4", 00:23:28.387 "trsvcid": "4420", 00:23:28.387 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:28.387 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:28.387 "hdgst": false, 00:23:28.387 "ddgst": false 00:23:28.387 }, 00:23:28.387 "method": "bdev_nvme_attach_controller" 00:23:28.387 },{ 00:23:28.387 "params": { 00:23:28.387 "name": "Nvme6", 00:23:28.387 "trtype": "tcp", 00:23:28.387 "traddr": "10.0.0.2", 00:23:28.387 "adrfam": "ipv4", 00:23:28.387 "trsvcid": "4420", 00:23:28.387 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:28.387 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:28.387 "hdgst": false, 00:23:28.387 "ddgst": false 00:23:28.387 }, 00:23:28.387 "method": "bdev_nvme_attach_controller" 00:23:28.387 },{ 00:23:28.387 "params": { 00:23:28.387 "name": "Nvme7", 00:23:28.387 "trtype": "tcp", 00:23:28.387 "traddr": "10.0.0.2", 00:23:28.387 "adrfam": "ipv4", 00:23:28.387 "trsvcid": "4420", 00:23:28.387 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:28.387 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:28.387 "hdgst": false, 00:23:28.387 "ddgst": false 00:23:28.387 }, 00:23:28.387 "method": "bdev_nvme_attach_controller" 00:23:28.387 },{ 00:23:28.387 "params": { 00:23:28.387 "name": "Nvme8", 00:23:28.387 "trtype": "tcp", 00:23:28.387 "traddr": "10.0.0.2", 00:23:28.387 "adrfam": "ipv4", 00:23:28.387 "trsvcid": "4420", 00:23:28.387 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:28.387 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:28.387 "hdgst": false, 00:23:28.387 "ddgst": false 00:23:28.387 }, 00:23:28.387 "method": "bdev_nvme_attach_controller" 00:23:28.387 },{ 00:23:28.387 "params": { 00:23:28.387 "name": "Nvme9", 00:23:28.387 "trtype": "tcp", 00:23:28.387 "traddr": "10.0.0.2", 00:23:28.387 "adrfam": "ipv4", 00:23:28.387 
"trsvcid": "4420", 00:23:28.387 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:28.387 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:28.387 "hdgst": false, 00:23:28.387 "ddgst": false 00:23:28.387 }, 00:23:28.387 "method": "bdev_nvme_attach_controller" 00:23:28.387 },{ 00:23:28.387 "params": { 00:23:28.387 "name": "Nvme10", 00:23:28.387 "trtype": "tcp", 00:23:28.387 "traddr": "10.0.0.2", 00:23:28.387 "adrfam": "ipv4", 00:23:28.387 "trsvcid": "4420", 00:23:28.387 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:28.387 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:28.387 "hdgst": false, 00:23:28.387 "ddgst": false 00:23:28.387 }, 00:23:28.387 "method": "bdev_nvme_attach_controller" 00:23:28.387 }' 00:23:28.387 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.387 [2024-07-15 12:53:20.173514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.387 [2024-07-15 12:53:20.262005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.778 Running I/O for 1 seconds... 
00:23:31.157
00:23:31.157 Latency(us)
00:23:31.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:31.157 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.157 Verification LBA range: start 0x0 length 0x400
00:23:31.157 Nvme1n1 : 1.15 166.89 10.43 0.00 0.00 378551.08 48377.48 291694.78
00:23:31.157 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.157 Verification LBA range: start 0x0 length 0x400
00:23:31.157 Nvme2n1 : 1.10 174.75 10.92 0.00 0.00 353508.38 55050.24 266910.25
00:23:31.157 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.157 Verification LBA range: start 0x0 length 0x400
00:23:31.157 Nvme3n1 : 1.05 182.49 11.41 0.00 0.00 330066.85 30742.34 314572.80
00:23:31.157 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.157 Verification LBA range: start 0x0 length 0x400
00:23:31.157 Nvme4n1 : 1.11 249.05 15.57 0.00 0.00 233660.07 5004.57 287881.77
00:23:31.157 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.157 Verification LBA range: start 0x0 length 0x400
00:23:31.157 Nvme5n1 : 1.15 176.73 11.05 0.00 0.00 315670.91 11081.54 278349.27
00:23:31.157 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.157 Verification LBA range: start 0x0 length 0x400
00:23:31.157 Nvme6n1 : 1.19 164.71 10.29 0.00 0.00 345075.16 3306.59 341263.83
00:23:31.157 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.157 Verification LBA range: start 0x0 length 0x400
00:23:31.157 Nvme7n1 : 1.20 213.28 13.33 0.00 0.00 260728.09 30980.65 276442.76
00:23:31.157 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.157 Verification LBA range: start 0x0 length 0x400
00:23:31.157 Nvme8n1 : 1.21 268.52 16.78 0.00 0.00 202703.04 2159.71 293601.28
00:23:31.157 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.157 Verification LBA range: start 0x0 length 0x400
00:23:31.157 Nvme9n1 : 1.20 160.16 10.01 0.00 0.00 331517.83 53858.68 350796.33
00:23:31.157 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.157 Verification LBA range: start 0x0 length 0x400
00:23:31.157 Nvme10n1 : 1.25 231.31 14.46 0.00 0.00 224481.05 3604.48 373674.36
00:23:31.157 ===================================================================================================================
00:23:31.157 Total : 1987.90 124.24 0.00 0.00 285974.99 2159.71 373674.36
00:23:31.157 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:23:31.157 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:23:31.157 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:31.157 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:31.157 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:23:31.157 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:31.157 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:23:31.157 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:31.157 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e
00:23:31.157 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:31.157 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:31.157 rmmod nvme_tcp
00:23:31.157 rmmod nvme_fabrics
00:23:31.157 rmmod
nvme_keyring 00:23:31.417 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:31.417 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:31.417 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:31.417 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 4010033 ']' 00:23:31.417 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 4010033 00:23:31.417 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 4010033 ']' 00:23:31.417 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 4010033 00:23:31.417 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:23:31.417 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:31.417 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4010033 00:23:31.417 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:31.417 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:31.417 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4010033' 00:23:31.417 killing process with pid 4010033 00:23:31.417 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 4010033 00:23:31.417 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 4010033 00:23:31.985 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:31.985 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:31.985 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:31.985 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:31.985 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:31.985 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.985 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:31.985 12:53:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:34.541 00:23:34.541 real 0m16.224s 00:23:34.541 user 0m37.711s 00:23:34.541 sys 0m5.822s 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:34.541 ************************************ 00:23:34.541 END TEST nvmf_shutdown_tc1 00:23:34.541 ************************************ 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:34.541 ************************************ 00:23:34.541 START TEST nvmf_shutdown_tc2 00:23:34.541 ************************************ 
00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:34.541 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:34.542 12:53:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:34.542 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:34.542 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:34.542 12:53:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:34.542 Found net devices under 0000:af:00.0: cvl_0_0 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:34.542 Found net devices under 0000:af:00.1: cvl_0_1 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:34.542 12:53:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip 
link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:34.542 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:34.542 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:34.542 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:23:34.542 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:34.542 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:34.542 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:34.542 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:23:34.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:34.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms
00:23:34.542 
00:23:34.542 --- 10.0.0.2 ping statistics ---
00:23:34.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:34.542 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms
00:23:34.542 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:34.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:34.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:23:34.542 00:23:34.542 --- 10.0.0.1 ping statistics --- 00:23:34.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.542 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:23:34.542 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.543 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:34.543 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:34.543 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.543 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:34.543 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:34.543 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.543 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:34.543 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:34.543 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:34.543 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:34.543 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:34.543 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:34.543 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=4012044 00:23:34.543 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4012044 00:23:34.543 12:53:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:34.543 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 4012044 ']' 00:23:34.543 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.543 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:34.543 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.543 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:34.543 12:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:34.543 [2024-07-15 12:53:26.321069] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:23:34.543 [2024-07-15 12:53:26.321123] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.543 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.543 [2024-07-15 12:53:26.408603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:34.800 [2024-07-15 12:53:26.515376] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.800 [2024-07-15 12:53:26.515425] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:34.800 [2024-07-15 12:53:26.515438] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:34.800 [2024-07-15 12:53:26.515449] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:34.800 [2024-07-15 12:53:26.515458] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:34.800 [2024-07-15 12:53:26.515527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:23:34.800 [2024-07-15 12:53:26.515659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:23:34.800 [2024-07-15 12:53:26.515793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:23:34.800 [2024-07-15 12:53:26.515795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:35.368 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:35.368 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0
00:23:35.368 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:23:35.368 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:23:35.368 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:23:35.368 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:35.368 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:23:35.368 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:35.368 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:23:35.625 [2024-07-15 12:53:27.310069] tcp.c: 672:nvmf_tcp_create:
*NOTICE*: *** TCP Transport Init *** 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.625 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:35.625 Malloc1 00:23:35.625 [2024-07-15 12:53:27.420248] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.625 Malloc2 00:23:35.625 Malloc3 00:23:35.625 Malloc4 00:23:35.883 Malloc5 00:23:35.883 Malloc6 00:23:35.883 Malloc7 00:23:35.883 Malloc8 00:23:35.883 Malloc9 00:23:35.883 Malloc10 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=4012364 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 4012364 /var/tmp/bdevperf.sock 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 4012364 ']' 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:36.142 { 00:23:36.142 "params": { 00:23:36.142 "name": "Nvme$subsystem", 00:23:36.142 "trtype": "$TEST_TRANSPORT", 00:23:36.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.142 "adrfam": "ipv4", 00:23:36.142 "trsvcid": "$NVMF_PORT", 00:23:36.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.142 "hdgst": ${hdgst:-false}, 00:23:36.142 "ddgst": ${ddgst:-false} 00:23:36.142 }, 00:23:36.142 "method": "bdev_nvme_attach_controller" 00:23:36.142 } 00:23:36.142 EOF 00:23:36.142 )") 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:36.142 { 00:23:36.142 "params": { 00:23:36.142 "name": "Nvme$subsystem", 00:23:36.142 "trtype": "$TEST_TRANSPORT", 00:23:36.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.142 "adrfam": "ipv4", 00:23:36.142 "trsvcid": "$NVMF_PORT", 00:23:36.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.142 "hdgst": ${hdgst:-false}, 00:23:36.142 "ddgst": ${ddgst:-false} 00:23:36.142 
}, 00:23:36.142 "method": "bdev_nvme_attach_controller" 00:23:36.142 } 00:23:36.142 EOF 00:23:36.142 )") 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:36.142 { 00:23:36.142 "params": { 00:23:36.142 "name": "Nvme$subsystem", 00:23:36.142 "trtype": "$TEST_TRANSPORT", 00:23:36.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.142 "adrfam": "ipv4", 00:23:36.142 "trsvcid": "$NVMF_PORT", 00:23:36.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.142 "hdgst": ${hdgst:-false}, 00:23:36.142 "ddgst": ${ddgst:-false} 00:23:36.142 }, 00:23:36.142 "method": "bdev_nvme_attach_controller" 00:23:36.142 } 00:23:36.142 EOF 00:23:36.142 )") 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:36.142 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:36.143 { 00:23:36.143 "params": { 00:23:36.143 "name": "Nvme$subsystem", 00:23:36.143 "trtype": "$TEST_TRANSPORT", 00:23:36.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.143 "adrfam": "ipv4", 00:23:36.143 "trsvcid": "$NVMF_PORT", 00:23:36.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.143 "hdgst": ${hdgst:-false}, 00:23:36.143 "ddgst": ${ddgst:-false} 00:23:36.143 }, 00:23:36.143 "method": "bdev_nvme_attach_controller" 00:23:36.143 } 00:23:36.143 EOF 00:23:36.143 )") 00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:36.143 12:53:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:36.143 { 00:23:36.143 "params": { 00:23:36.143 "name": "Nvme$subsystem", 00:23:36.143 "trtype": "$TEST_TRANSPORT", 00:23:36.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.143 "adrfam": "ipv4", 00:23:36.143 "trsvcid": "$NVMF_PORT", 00:23:36.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.143 "hdgst": ${hdgst:-false}, 00:23:36.143 "ddgst": ${ddgst:-false} 00:23:36.143 }, 00:23:36.143 "method": "bdev_nvme_attach_controller" 00:23:36.143 } 00:23:36.143 EOF 00:23:36.143 )") 00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:36.143 { 00:23:36.143 "params": { 00:23:36.143 "name": "Nvme$subsystem", 00:23:36.143 "trtype": "$TEST_TRANSPORT", 00:23:36.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.143 "adrfam": "ipv4", 00:23:36.143 "trsvcid": "$NVMF_PORT", 00:23:36.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.143 "hdgst": ${hdgst:-false}, 00:23:36.143 "ddgst": ${ddgst:-false} 00:23:36.143 }, 00:23:36.143 "method": "bdev_nvme_attach_controller" 00:23:36.143 } 00:23:36.143 EOF 00:23:36.143 )") 00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:36.143 { 00:23:36.143 
"params": { 00:23:36.143 "name": "Nvme$subsystem", 00:23:36.143 "trtype": "$TEST_TRANSPORT", 00:23:36.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.143 "adrfam": "ipv4", 00:23:36.143 "trsvcid": "$NVMF_PORT", 00:23:36.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.143 "hdgst": ${hdgst:-false}, 00:23:36.143 "ddgst": ${ddgst:-false} 00:23:36.143 }, 00:23:36.143 "method": "bdev_nvme_attach_controller" 00:23:36.143 } 00:23:36.143 EOF 00:23:36.143 )") 00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:36.143 [2024-07-15 12:53:27.916757] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:36.143 { 00:23:36.143 "params": { 00:23:36.143 "name": "Nvme$subsystem", 00:23:36.143 "trtype": "$TEST_TRANSPORT", 00:23:36.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.143 "adrfam": "ipv4", 00:23:36.143 "trsvcid": "$NVMF_PORT", 00:23:36.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.143 "hdgst": ${hdgst:-false}, 00:23:36.143 "ddgst": ${ddgst:-false} 00:23:36.143 }, 00:23:36.143 "method": "bdev_nvme_attach_controller" 00:23:36.143 } 00:23:36.143 EOF 00:23:36.143 )") 00:23:36.143 [2024-07-15 12:53:27.916822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012364 ] 00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:36.143 { 00:23:36.143 "params": { 00:23:36.143 "name": "Nvme$subsystem", 00:23:36.143 "trtype": "$TEST_TRANSPORT", 00:23:36.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.143 "adrfam": "ipv4", 00:23:36.143 "trsvcid": "$NVMF_PORT", 00:23:36.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.143 "hdgst": ${hdgst:-false}, 00:23:36.143 "ddgst": ${ddgst:-false} 00:23:36.143 }, 00:23:36.143 "method": "bdev_nvme_attach_controller" 00:23:36.143 } 00:23:36.143 EOF 00:23:36.143 )") 00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:36.143 { 00:23:36.143 "params": { 00:23:36.143 "name": "Nvme$subsystem", 00:23:36.143 "trtype": "$TEST_TRANSPORT", 00:23:36.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.143 "adrfam": "ipv4", 00:23:36.143 "trsvcid": "$NVMF_PORT", 00:23:36.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.143 "hdgst": ${hdgst:-false}, 00:23:36.143 "ddgst": ${ddgst:-false} 00:23:36.143 }, 00:23:36.143 "method": "bdev_nvme_attach_controller" 00:23:36.143 } 00:23:36.143 EOF 00:23:36.143 )") 00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
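The trace above shows nvmf/common.sh accumulating one JSON fragment per subsystem into a bash array (`config+=("$(cat <<-EOF ... EOF)")`), then joining the fragments with `IFS=,` before piping the result through `jq`. A minimal standalone sketch of that pattern (the transport, address, and port values here are illustrative stand-ins, not the test's real configuration):

```shell
#!/usr/bin/env bash
# Sketch of the config-accumulation pattern traced above (nvmf/common.sh @532-@558).
# TEST_TRANSPORT / NVMF_FIRST_TARGET_IP / NVMF_PORT are illustrative values.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  # Each pass emits one bdev_nvme_attach_controller fragment for cnode$subsystem.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done

# Join the fragments with commas, as common.sh does before feeding jq.
IFS=,
joined="${config[*]}"
echo "fragments: ${#config[@]}"
```

The `${hdgst:-false}` / `${ddgst:-false}` defaults are why the printed config further down shows `"hdgst": false` for every controller unless a test overrides them.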
00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:36.143 12:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:36.143 "params": { 00:23:36.143 "name": "Nvme1", 00:23:36.143 "trtype": "tcp", 00:23:36.143 "traddr": "10.0.0.2", 00:23:36.143 "adrfam": "ipv4", 00:23:36.143 "trsvcid": "4420", 00:23:36.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.143 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.143 "hdgst": false, 00:23:36.143 "ddgst": false 00:23:36.143 }, 00:23:36.143 "method": "bdev_nvme_attach_controller" 00:23:36.143 },{ 00:23:36.143 "params": { 00:23:36.143 "name": "Nvme2", 00:23:36.143 "trtype": "tcp", 00:23:36.143 "traddr": "10.0.0.2", 00:23:36.143 "adrfam": "ipv4", 00:23:36.143 "trsvcid": "4420", 00:23:36.143 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:36.143 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:36.143 "hdgst": false, 00:23:36.143 "ddgst": false 00:23:36.144 }, 00:23:36.144 "method": "bdev_nvme_attach_controller" 00:23:36.144 },{ 00:23:36.144 "params": { 00:23:36.144 "name": "Nvme3", 00:23:36.144 "trtype": "tcp", 00:23:36.144 "traddr": "10.0.0.2", 00:23:36.144 "adrfam": "ipv4", 00:23:36.144 "trsvcid": "4420", 00:23:36.144 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:36.144 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:36.144 "hdgst": false, 00:23:36.144 "ddgst": false 00:23:36.144 }, 00:23:36.144 "method": "bdev_nvme_attach_controller" 00:23:36.144 },{ 00:23:36.144 "params": { 00:23:36.144 "name": "Nvme4", 00:23:36.144 "trtype": "tcp", 00:23:36.144 "traddr": "10.0.0.2", 00:23:36.144 "adrfam": "ipv4", 00:23:36.144 "trsvcid": "4420", 00:23:36.144 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:36.144 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:36.144 "hdgst": false, 00:23:36.144 "ddgst": false 00:23:36.144 }, 00:23:36.144 "method": "bdev_nvme_attach_controller" 00:23:36.144 },{ 00:23:36.144 "params": { 00:23:36.144 "name": "Nvme5", 00:23:36.144 
"trtype": "tcp", 00:23:36.144 "traddr": "10.0.0.2", 00:23:36.144 "adrfam": "ipv4", 00:23:36.144 "trsvcid": "4420", 00:23:36.144 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:36.144 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:36.144 "hdgst": false, 00:23:36.144 "ddgst": false 00:23:36.144 }, 00:23:36.144 "method": "bdev_nvme_attach_controller" 00:23:36.144 },{ 00:23:36.144 "params": { 00:23:36.144 "name": "Nvme6", 00:23:36.144 "trtype": "tcp", 00:23:36.144 "traddr": "10.0.0.2", 00:23:36.144 "adrfam": "ipv4", 00:23:36.144 "trsvcid": "4420", 00:23:36.144 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:36.144 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:36.144 "hdgst": false, 00:23:36.144 "ddgst": false 00:23:36.144 }, 00:23:36.144 "method": "bdev_nvme_attach_controller" 00:23:36.144 },{ 00:23:36.144 "params": { 00:23:36.144 "name": "Nvme7", 00:23:36.144 "trtype": "tcp", 00:23:36.144 "traddr": "10.0.0.2", 00:23:36.144 "adrfam": "ipv4", 00:23:36.144 "trsvcid": "4420", 00:23:36.144 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:36.144 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:36.144 "hdgst": false, 00:23:36.144 "ddgst": false 00:23:36.144 }, 00:23:36.144 "method": "bdev_nvme_attach_controller" 00:23:36.144 },{ 00:23:36.144 "params": { 00:23:36.144 "name": "Nvme8", 00:23:36.144 "trtype": "tcp", 00:23:36.144 "traddr": "10.0.0.2", 00:23:36.144 "adrfam": "ipv4", 00:23:36.144 "trsvcid": "4420", 00:23:36.144 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:36.144 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:36.144 "hdgst": false, 00:23:36.144 "ddgst": false 00:23:36.144 }, 00:23:36.144 "method": "bdev_nvme_attach_controller" 00:23:36.144 },{ 00:23:36.144 "params": { 00:23:36.144 "name": "Nvme9", 00:23:36.144 "trtype": "tcp", 00:23:36.144 "traddr": "10.0.0.2", 00:23:36.144 "adrfam": "ipv4", 00:23:36.144 "trsvcid": "4420", 00:23:36.144 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:36.144 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:36.144 "hdgst": false, 00:23:36.144 "ddgst": 
false 00:23:36.144 }, 00:23:36.144 "method": "bdev_nvme_attach_controller" 00:23:36.144 },{ 00:23:36.144 "params": { 00:23:36.144 "name": "Nvme10", 00:23:36.144 "trtype": "tcp", 00:23:36.144 "traddr": "10.0.0.2", 00:23:36.144 "adrfam": "ipv4", 00:23:36.144 "trsvcid": "4420", 00:23:36.144 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:36.144 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:36.144 "hdgst": false, 00:23:36.144 "ddgst": false 00:23:36.144 }, 00:23:36.144 "method": "bdev_nvme_attach_controller" 00:23:36.144 }' 00:23:36.144 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.144 [2024-07-15 12:53:27.999911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.402 [2024-07-15 12:53:28.085054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.777 Running I/O for 10 seconds... 00:23:38.036 12:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:38.036 12:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:38.036 12:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:38.036 12:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.036 12:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:38.036 12:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.036 12:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:38.036 12:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:38.036 12:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:38.036 12:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@57 -- # local ret=1 00:23:38.036 12:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:38.036 12:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:38.036 12:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:38.036 12:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:38.036 12:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:38.036 12:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.036 12:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:38.036 12:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.036 12:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:38.036 12:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:38.036 12:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:38.294 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:38.294 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:38.294 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:38.294 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:38.294 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.294 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:23:38.552 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.552 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:38.553 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:38.553 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:38.811 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:38.811 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:38.811 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:38.811 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:38.811 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.811 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:38.811 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.812 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:38.812 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:38.812 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:38.812 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:38.812 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:38.812 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 4012364 00:23:38.812 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@948 -- # '[' -z 4012364 ']' 00:23:38.812 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 4012364 00:23:38.812 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:38.812 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:38.812 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4012364 00:23:38.812 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:38.812 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:38.812 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4012364' 00:23:38.812 killing process with pid 4012364 00:23:38.812 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 4012364 00:23:38.812 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 4012364 00:23:39.090 Received shutdown signal, test time was about 1.113247 seconds 00:23:39.090 00:23:39.090 Latency(us) 00:23:39.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.090 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.090 Verification LBA range: start 0x0 length 0x400 00:23:39.090 Nvme1n1 : 1.10 174.96 10.94 0.00 0.00 360637.28 49330.73 346983.33 00:23:39.090 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.090 Verification LBA range: start 0x0 length 0x400 00:23:39.090 Nvme2n1 : 1.08 177.33 11.08 0.00 0.00 348539.19 50283.99 327918.31 00:23:39.090 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.090 Verification LBA range: start 0x0 length 0x400 
00:23:39.090 Nvme3n1 : 1.05 183.19 11.45 0.00 0.00 328791.51 30146.56 331731.32 00:23:39.090 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.090 Verification LBA range: start 0x0 length 0x400 00:23:39.090 Nvme4n1 : 1.06 241.88 15.12 0.00 0.00 243392.70 30742.34 280255.77 00:23:39.090 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.090 Verification LBA range: start 0x0 length 0x400 00:23:39.090 Nvme5n1 : 1.10 175.14 10.95 0.00 0.00 328996.00 27644.28 367954.85 00:23:39.090 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.090 Verification LBA range: start 0x0 length 0x400 00:23:39.090 Nvme6n1 : 1.08 178.03 11.13 0.00 0.00 314914.91 14954.12 400365.38 00:23:39.090 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.090 Verification LBA range: start 0x0 length 0x400 00:23:39.090 Nvme7n1 : 1.06 181.21 11.33 0.00 0.00 300449.05 30980.65 293601.28 00:23:39.090 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.090 Verification LBA range: start 0x0 length 0x400 00:23:39.090 Nvme8n1 : 1.08 235.94 14.75 0.00 0.00 225411.82 12988.04 287881.77 00:23:39.090 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.090 Verification LBA range: start 0x0 length 0x400 00:23:39.090 Nvme9n1 : 1.11 172.62 10.79 0.00 0.00 303283.98 8519.68 322198.81 00:23:39.090 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.090 Verification LBA range: start 0x0 length 0x400 00:23:39.090 Nvme10n1 : 1.09 176.46 11.03 0.00 0.00 287456.66 50045.67 331731.32 00:23:39.090 =================================================================================================================== 00:23:39.090 Total : 1896.76 118.55 0.00 0.00 299898.48 8519.68 400365.38 00:23:39.090 12:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:40.468 12:53:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 4012044 00:23:40.468 12:53:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:40.468 12:53:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:40.468 12:53:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:40.468 12:53:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:40.468 12:53:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:40.468 12:53:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:40.468 12:53:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:40.468 12:53:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:40.468 12:53:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:40.468 12:53:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:40.469 12:53:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:40.469 rmmod nvme_tcp 00:23:40.469 rmmod nvme_fabrics 00:23:40.469 rmmod nvme_keyring 00:23:40.469 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:40.469 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:40.469 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:40.469 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 4012044 ']' 00:23:40.469 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@490 -- # killprocess 4012044 00:23:40.469 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 4012044 ']' 00:23:40.469 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 4012044 00:23:40.469 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:40.469 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:40.469 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4012044 00:23:40.469 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:40.469 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:40.469 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4012044' 00:23:40.469 killing process with pid 4012044 00:23:40.469 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 4012044 00:23:40.469 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 4012044 00:23:40.728 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:40.728 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:40.728 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:40.728 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:40.728 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:40.728 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:23:40.728 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.728 12:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:43.262 00:23:43.262 real 0m8.720s 00:23:43.262 user 0m27.236s 00:23:43.262 sys 0m1.491s 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:43.262 ************************************ 00:23:43.262 END TEST nvmf_shutdown_tc2 00:23:43.262 ************************************ 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:43.262 ************************************ 00:23:43.262 START TEST nvmf_shutdown_tc3 00:23:43.262 ************************************ 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:43.262 
12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:43.262 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:43.262 12:53:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:43.262 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:af:00.0: cvl_0_0' 00:23:43.262 Found net devices under 0000:af:00.0: cvl_0_0 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:43.262 Found net devices under 0000:af:00.1: cvl_0_1 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.262 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:43.263 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:43.263 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:43.263 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:43.263 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:43.263 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:43.263 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:43.263 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 
up 00:23:43.263 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:43.263 12:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:43.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:43.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:23:43.263 00:23:43.263 --- 10.0.0.2 ping statistics --- 00:23:43.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.263 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:43.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:43.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:23:43.263 00:23:43.263 --- 10.0.0.1 ping statistics --- 00:23:43.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.263 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp 
-o' 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=4013785 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 4013785 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 4013785 ']' 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:43.263 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.263 [2024-07-15 12:53:35.121443] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:23:43.263 [2024-07-15 12:53:35.121503] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.263 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.522 [2024-07-15 12:53:35.211766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:43.522 [2024-07-15 12:53:35.319833] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.522 [2024-07-15 12:53:35.319879] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.522 [2024-07-15 12:53:35.319893] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.522 [2024-07-15 12:53:35.319904] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.522 [2024-07-15 12:53:35.319913] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:43.522 [2024-07-15 12:53:35.320035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:43.522 [2024-07-15 12:53:35.320071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:43.522 [2024-07-15 12:53:35.320161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:43.522 [2024-07-15 12:53:35.320162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.522 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:43.522 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:43.522 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:43.522 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:43.522 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.781 [2024-07-15 12:53:35.501151] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:43.781 
12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:43.781 12:53:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.781 12:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.781 Malloc1 00:23:43.781 [2024-07-15 12:53:35.607106] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.781 Malloc2 00:23:43.781 Malloc3 00:23:43.781 Malloc4 00:23:44.040 Malloc5 00:23:44.040 Malloc6 00:23:44.040 Malloc7 00:23:44.040 Malloc8 00:23:44.040 Malloc9 00:23:44.300 Malloc10 00:23:44.300 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.300 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:44.300 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:44.300 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=4013863 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 
4013863 /var/tmp/bdevperf.sock 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 4013863 ']' 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:44.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.301 { 00:23:44.301 "params": { 00:23:44.301 "name": "Nvme$subsystem", 00:23:44.301 "trtype": "$TEST_TRANSPORT", 00:23:44.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.301 "adrfam": "ipv4", 00:23:44.301 "trsvcid": "$NVMF_PORT", 00:23:44.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.301 "hdgst": ${hdgst:-false}, 00:23:44.301 "ddgst": ${ddgst:-false} 00:23:44.301 }, 00:23:44.301 "method": "bdev_nvme_attach_controller" 00:23:44.301 } 00:23:44.301 EOF 00:23:44.301 )") 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.301 { 00:23:44.301 "params": { 00:23:44.301 "name": "Nvme$subsystem", 00:23:44.301 "trtype": "$TEST_TRANSPORT", 00:23:44.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.301 "adrfam": "ipv4", 00:23:44.301 "trsvcid": "$NVMF_PORT", 00:23:44.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.301 "hdgst": ${hdgst:-false}, 00:23:44.301 "ddgst": ${ddgst:-false} 00:23:44.301 
}, 00:23:44.301 "method": "bdev_nvme_attach_controller" 00:23:44.301 } 00:23:44.301 EOF 00:23:44.301 )") 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.301 { 00:23:44.301 "params": { 00:23:44.301 "name": "Nvme$subsystem", 00:23:44.301 "trtype": "$TEST_TRANSPORT", 00:23:44.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.301 "adrfam": "ipv4", 00:23:44.301 "trsvcid": "$NVMF_PORT", 00:23:44.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.301 "hdgst": ${hdgst:-false}, 00:23:44.301 "ddgst": ${ddgst:-false} 00:23:44.301 }, 00:23:44.301 "method": "bdev_nvme_attach_controller" 00:23:44.301 } 00:23:44.301 EOF 00:23:44.301 )") 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.301 { 00:23:44.301 "params": { 00:23:44.301 "name": "Nvme$subsystem", 00:23:44.301 "trtype": "$TEST_TRANSPORT", 00:23:44.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.301 "adrfam": "ipv4", 00:23:44.301 "trsvcid": "$NVMF_PORT", 00:23:44.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.301 "hdgst": ${hdgst:-false}, 00:23:44.301 "ddgst": ${ddgst:-false} 00:23:44.301 }, 00:23:44.301 "method": "bdev_nvme_attach_controller" 00:23:44.301 } 00:23:44.301 EOF 00:23:44.301 )") 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:44.301 12:53:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.301 { 00:23:44.301 "params": { 00:23:44.301 "name": "Nvme$subsystem", 00:23:44.301 "trtype": "$TEST_TRANSPORT", 00:23:44.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.301 "adrfam": "ipv4", 00:23:44.301 "trsvcid": "$NVMF_PORT", 00:23:44.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.301 "hdgst": ${hdgst:-false}, 00:23:44.301 "ddgst": ${ddgst:-false} 00:23:44.301 }, 00:23:44.301 "method": "bdev_nvme_attach_controller" 00:23:44.301 } 00:23:44.301 EOF 00:23:44.301 )") 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.301 { 00:23:44.301 "params": { 00:23:44.301 "name": "Nvme$subsystem", 00:23:44.301 "trtype": "$TEST_TRANSPORT", 00:23:44.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.301 "adrfam": "ipv4", 00:23:44.301 "trsvcid": "$NVMF_PORT", 00:23:44.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.301 "hdgst": ${hdgst:-false}, 00:23:44.301 "ddgst": ${ddgst:-false} 00:23:44.301 }, 00:23:44.301 "method": "bdev_nvme_attach_controller" 00:23:44.301 } 00:23:44.301 EOF 00:23:44.301 )") 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.301 { 00:23:44.301 
"params": { 00:23:44.301 "name": "Nvme$subsystem", 00:23:44.301 "trtype": "$TEST_TRANSPORT", 00:23:44.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.301 "adrfam": "ipv4", 00:23:44.301 "trsvcid": "$NVMF_PORT", 00:23:44.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.301 "hdgst": ${hdgst:-false}, 00:23:44.301 "ddgst": ${ddgst:-false} 00:23:44.301 }, 00:23:44.301 "method": "bdev_nvme_attach_controller" 00:23:44.301 } 00:23:44.301 EOF 00:23:44.301 )") 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:44.301 [2024-07-15 12:53:36.098548] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:23:44.301 [2024-07-15 12:53:36.098610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013863 ] 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.301 { 00:23:44.301 "params": { 00:23:44.301 "name": "Nvme$subsystem", 00:23:44.301 "trtype": "$TEST_TRANSPORT", 00:23:44.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.301 "adrfam": "ipv4", 00:23:44.301 "trsvcid": "$NVMF_PORT", 00:23:44.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.301 "hdgst": ${hdgst:-false}, 00:23:44.301 "ddgst": ${ddgst:-false} 00:23:44.301 }, 00:23:44.301 "method": "bdev_nvme_attach_controller" 00:23:44.301 } 00:23:44.301 EOF 00:23:44.301 )") 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.301 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.301 { 00:23:44.301 "params": { 00:23:44.301 "name": "Nvme$subsystem", 00:23:44.301 "trtype": "$TEST_TRANSPORT", 00:23:44.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.301 "adrfam": "ipv4", 00:23:44.301 "trsvcid": "$NVMF_PORT", 00:23:44.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.301 "hdgst": ${hdgst:-false}, 00:23:44.301 "ddgst": ${ddgst:-false} 00:23:44.301 }, 00:23:44.301 "method": "bdev_nvme_attach_controller" 00:23:44.301 } 00:23:44.301 EOF 00:23:44.301 )") 00:23:44.302 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:44.302 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.302 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.302 { 00:23:44.302 "params": { 00:23:44.302 "name": "Nvme$subsystem", 00:23:44.302 "trtype": "$TEST_TRANSPORT", 00:23:44.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.302 "adrfam": "ipv4", 00:23:44.302 "trsvcid": "$NVMF_PORT", 00:23:44.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.302 "hdgst": ${hdgst:-false}, 00:23:44.302 "ddgst": ${ddgst:-false} 00:23:44.302 }, 00:23:44.302 "method": "bdev_nvme_attach_controller" 00:23:44.302 } 00:23:44.302 EOF 00:23:44.302 )") 00:23:44.302 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:44.302 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:23:44.302 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:44.302 12:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:44.302 "params": { 00:23:44.302 "name": "Nvme1", 00:23:44.302 "trtype": "tcp", 00:23:44.302 "traddr": "10.0.0.2", 00:23:44.302 "adrfam": "ipv4", 00:23:44.302 "trsvcid": "4420", 00:23:44.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:44.302 "hdgst": false, 00:23:44.302 "ddgst": false 00:23:44.302 }, 00:23:44.302 "method": "bdev_nvme_attach_controller" 00:23:44.302 },{ 00:23:44.302 "params": { 00:23:44.302 "name": "Nvme2", 00:23:44.302 "trtype": "tcp", 00:23:44.302 "traddr": "10.0.0.2", 00:23:44.302 "adrfam": "ipv4", 00:23:44.302 "trsvcid": "4420", 00:23:44.302 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:44.302 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:44.302 "hdgst": false, 00:23:44.302 "ddgst": false 00:23:44.302 }, 00:23:44.302 "method": "bdev_nvme_attach_controller" 00:23:44.302 },{ 00:23:44.302 "params": { 00:23:44.302 "name": "Nvme3", 00:23:44.302 "trtype": "tcp", 00:23:44.302 "traddr": "10.0.0.2", 00:23:44.302 "adrfam": "ipv4", 00:23:44.302 "trsvcid": "4420", 00:23:44.302 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:44.302 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:44.302 "hdgst": false, 00:23:44.302 "ddgst": false 00:23:44.302 }, 00:23:44.302 "method": "bdev_nvme_attach_controller" 00:23:44.302 },{ 00:23:44.302 "params": { 00:23:44.302 "name": "Nvme4", 00:23:44.302 "trtype": "tcp", 00:23:44.302 "traddr": "10.0.0.2", 00:23:44.302 "adrfam": "ipv4", 00:23:44.302 "trsvcid": "4420", 00:23:44.302 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:44.302 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:44.302 "hdgst": false, 00:23:44.302 "ddgst": false 00:23:44.302 }, 00:23:44.302 "method": "bdev_nvme_attach_controller" 00:23:44.302 },{ 00:23:44.302 "params": { 00:23:44.302 "name": "Nvme5", 00:23:44.302 
"trtype": "tcp", 00:23:44.302 "traddr": "10.0.0.2", 00:23:44.302 "adrfam": "ipv4", 00:23:44.302 "trsvcid": "4420", 00:23:44.302 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:44.302 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:44.302 "hdgst": false, 00:23:44.302 "ddgst": false 00:23:44.302 }, 00:23:44.302 "method": "bdev_nvme_attach_controller" 00:23:44.302 },{ 00:23:44.302 "params": { 00:23:44.302 "name": "Nvme6", 00:23:44.302 "trtype": "tcp", 00:23:44.302 "traddr": "10.0.0.2", 00:23:44.302 "adrfam": "ipv4", 00:23:44.302 "trsvcid": "4420", 00:23:44.302 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:44.302 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:44.302 "hdgst": false, 00:23:44.302 "ddgst": false 00:23:44.302 }, 00:23:44.302 "method": "bdev_nvme_attach_controller" 00:23:44.302 },{ 00:23:44.302 "params": { 00:23:44.302 "name": "Nvme7", 00:23:44.302 "trtype": "tcp", 00:23:44.302 "traddr": "10.0.0.2", 00:23:44.302 "adrfam": "ipv4", 00:23:44.302 "trsvcid": "4420", 00:23:44.302 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:44.302 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:44.302 "hdgst": false, 00:23:44.302 "ddgst": false 00:23:44.302 }, 00:23:44.302 "method": "bdev_nvme_attach_controller" 00:23:44.302 },{ 00:23:44.302 "params": { 00:23:44.302 "name": "Nvme8", 00:23:44.302 "trtype": "tcp", 00:23:44.302 "traddr": "10.0.0.2", 00:23:44.302 "adrfam": "ipv4", 00:23:44.302 "trsvcid": "4420", 00:23:44.302 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:44.302 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:44.302 "hdgst": false, 00:23:44.302 "ddgst": false 00:23:44.302 }, 00:23:44.302 "method": "bdev_nvme_attach_controller" 00:23:44.302 },{ 00:23:44.302 "params": { 00:23:44.302 "name": "Nvme9", 00:23:44.302 "trtype": "tcp", 00:23:44.302 "traddr": "10.0.0.2", 00:23:44.302 "adrfam": "ipv4", 00:23:44.302 "trsvcid": "4420", 00:23:44.302 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:44.302 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:44.302 "hdgst": false, 00:23:44.302 "ddgst": 
false 00:23:44.302 }, 00:23:44.302 "method": "bdev_nvme_attach_controller" 00:23:44.302 },{ 00:23:44.302 "params": { 00:23:44.302 "name": "Nvme10", 00:23:44.302 "trtype": "tcp", 00:23:44.302 "traddr": "10.0.0.2", 00:23:44.302 "adrfam": "ipv4", 00:23:44.302 "trsvcid": "4420", 00:23:44.302 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:44.302 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:44.302 "hdgst": false, 00:23:44.302 "ddgst": false 00:23:44.302 }, 00:23:44.302 "method": "bdev_nvme_attach_controller" 00:23:44.302 }' 00:23:44.302 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.302 [2024-07-15 12:53:36.182092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.562 [2024-07-15 12:53:36.274285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.941 Running I/O for 10 seconds... 00:23:45.941 12:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:45.941 12:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:45.941 12:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:45.941 12:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.941 12:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:46.200 12:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.200 12:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:46.200 12:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:46.200 12:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z 
/var/tmp/bdevperf.sock ']' 00:23:46.200 12:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:46.200 12:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:46.200 12:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:46.200 12:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:46.200 12:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:46.200 12:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:46.200 12:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.200 12:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:46.200 12:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:46.200 12:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.200 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:46.200 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:46.200 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:46.459 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:46.459 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:46.459 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:46.459 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:46.459 
12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.459 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:46.459 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.459 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:46.459 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:46.459 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:46.717 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:46.717 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:46.717 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:46.717 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:46.717 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.717 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:46.717 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.717 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:46.717 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:46.717 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:46.717 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:46.717 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 
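The xtrace above shows target/shutdown.sh's waitforio helper polling bdev_get_iostat over the bdevperf RPC socket until Nvme1n1 reports at least 100 read ops (3, then 67, then 131). A minimal self-contained sketch of that loop; read_io_count here is a simulated stand-in for the real rpc_cmd | jq pipeline, and its growth increments are invented so the sketch runs anywhere:

```shell
#!/usr/bin/env bash
# Sketch of the waitforio polling loop visible in the xtrace above.
# read_io_count stands in for the real query:
#   rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
#     | jq -r '.bdevs[0].num_read_ops'
ops=3
read_io_count() {
  count=$ops           # the real loop parses this out of the RPC reply
  ops=$((ops + 64))    # invented growth: 3, 67, 131, ...
}

waitforio() {
  local ret=1 i count
  for ((i = 10; i != 0; i--)); do  # at most 10 polls, as in the trace
    read_io_count
    if [ "$count" -ge 100 ]; then  # enough I/O observed to call it running
      ret=0
      break
    fi
    sleep 0.25                     # matches the trace's poll interval
  done
  return "$ret"
}

waitforio && echo "I/O observed"   # prints "I/O observed"
```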
00:23:46.717 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 4013785
00:23:46.717 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 4013785 ']'
00:23:46.717 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 4013785
00:23:46.717 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname
00:23:46.717 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:46.717 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4013785
00:23:46.995 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:23:46.995 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:23:46.995 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4013785'
killing process with pid 4013785
00:23:46.995 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 4013785
00:23:46.995 12:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 4013785
00:23:46.995 [2024-07-15 12:53:38.679064] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b31b0 is same with the state(5) to be set
00:23:46.995 [... the same tcp.c:1607 message for tqpair=0x13b31b0 repeated at timestamps 12:53:38.679219 through 12:53:38.680420 ...]
00:23:46.996 [2024-07-15 12:53:38.683558] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b5bb0 is same with the state(5) to be set
00:23:46.996 [... the same message for tqpair=0x13b5bb0 repeated at 12:53:38.683630 and 12:53:38.683655 ...]
00:23:46.996 [2024-07-15 12:53:38.686048] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3650 is same with the state(5) to be set
00:23:46.996 [... the same message for tqpair=0x13b3650 repeated at timestamps 12:53:38.686093 through 12:53:38.686951 ...]
00:23:46.996 [2024-07-15 12:53:38.686969]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3650 is same with the state(5) to be set
00:23:46.996 [2024-07-15 12:53:38.686988] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3650 is same with the state(5) to be set
00:23:46.996 [2024-07-15 12:53:38.686993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.996 [2024-07-15 12:53:38.687007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3650 is same with the state(5) to be set
00:23:46.996 [2024-07-15 12:53:38.687031] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3650 is same with the state(5) to be set
00:23:46.996 [2024-07-15 12:53:38.687034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.996 [2024-07-15 12:53:38.687053] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3650 is same with the state(5) to be set
00:23:46.996 [2024-07-15 12:53:38.687060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.996 [2024-07-15 12:53:38.687073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.996 [2024-07-15 12:53:38.687073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3650 is same with the state(5) to be set
00:23:46.996 [2024-07-15 12:53:38.687094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3650 is same with the state(5) to be set
00:23:46.997 [2024-07-15 12:53:38.687106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3650 is same with the state(5) to be set
00:23:46.997 [2024-07-15 12:53:38.687119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687140] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3650 is same with the state(5) to be set
00:23:46.997 [2024-07-15 12:53:38.687145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687161] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3650 is same with the state(5) to be set
00:23:46.997 [2024-07-15 12:53:38.687172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3650 is same with the state(5) to be set
00:23:46.997 [2024-07-15 12:53:38.687183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3650 is same with the state(5) to be set
00:23:46.997 [2024-07-15 12:53:38.687208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687221] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3650 is same with the state(5) to be set
00:23:46.997 [2024-07-15 12:53:38.687222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687242] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3650 is same with the state(5) to be set
00:23:46.997 [2024-07-15 12:53:38.687247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687270] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3650 is same with the state(5) to be set
00:23:46.997 [2024-07-15 12:53:38.687280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3650 is same with the state(5) to be set
00:23:46.997 [2024-07-15 12:53:38.687306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3650 is same with the state(5) to be set
00:23:46.997 [2024-07-15 12:53:38.687317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.997 [2024-07-15 12:53:38.687754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.997 [2024-07-15 12:53:38.687764] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.997 [2024-07-15 12:53:38.687776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.997 [2024-07-15 12:53:38.687785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.997 [2024-07-15 12:53:38.687796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.687806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.687819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.687829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.687840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.687849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.687861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.687870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.687881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.687891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.687902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.687911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.687923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.687932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.687944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.687953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.687964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.687974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.687985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.687994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.688015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.688036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.688057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.688079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.688100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 
12:53:38.688124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.688145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.688165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.688186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.688208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.688229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688243] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.688259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.688293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.688314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.688336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.688358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.688382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.688404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.688426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.688446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.998 [2024-07-15 12:53:38.688467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.688504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:46.998 [2024-07-15 12:53:38.688924] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc82110 was disconnected and freed. reset controller. 
00:23:46.998 [2024-07-15 12:53:38.689029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.998 [2024-07-15 12:53:38.689044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.689056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.998 [2024-07-15 12:53:38.689065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.689076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.998 [2024-07-15 12:53:38.689086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.689097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.998 [2024-07-15 12:53:38.689106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.998 [2024-07-15 12:53:38.689116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66020 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.689176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.999 [2024-07-15 12:53:38.689189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.999 [2024-07-15 12:53:38.689199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.999 [2024-07-15 12:53:38.689208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.999 [2024-07-15 12:53:38.689223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.999 [2024-07-15 12:53:38.689232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.999 [2024-07-15 12:53:38.689242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.999 [2024-07-15 12:53:38.689252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.999 [2024-07-15 12:53:38.689270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb6120 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.691905] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.691973] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.691996] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692054] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692074] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692093] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692149] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692168] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692186] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692297] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692335] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692419] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692437] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692474] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692535] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692573] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692591] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692674] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692692] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692711] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692748] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692766] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692802] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692838] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692876] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692894] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692917] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692954] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.692991] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.693009] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.693026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.693045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.693064] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.693082] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.693100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.693117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.693135] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.693154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3af0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.694261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:46.999 [2024-07-15 12:53:38.694300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd66020 (9): Bad file descriptor 00:23:46.999 [2024-07-15 12:53:38.696327] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.696373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:46.999 [2024-07-15 12:53:38.696387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696447] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696469] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696480] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696509] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 
is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696521] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696532] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696598] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696609] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696620] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696641] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be 
set 00:23:47.000 [2024-07-15 12:53:38.696654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696676] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696687] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696698] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696708] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696731] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696753] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696764] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696775] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 
12:53:38.696786] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696797] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696833] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696844] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696855] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696866] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696877] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696888] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696898] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696909] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696921] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696932] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696955] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.696989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.697001] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.000 [2024-07-15 12:53:38.697002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.000 [2024-07-15 12:53:38.697012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.697026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.697033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd66020 with addr=10.0.0.2, port=4420 00:23:47.001 [2024-07-15 12:53:38.697037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.697046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66020 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.697049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.697061] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.697072] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3fb0 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.697954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd66020 (9): Bad file descriptor 00:23:47.001 [2024-07-15 12:53:38.698024] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:47.001 [2024-07-15 12:53:38.698080] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:47.001 [2024-07-15 12:53:38.698443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698518] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698537] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698576] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:47.001 [2024-07-15 12:53:38.698595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:47.001 [2024-07-15 12:53:38.698617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:47.001 [2024-07-15 12:53:38.698638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698657] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698676] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698695] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698713] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698732] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698751] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698788] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698825] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698844] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698862] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698880] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698908] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.698983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with 
the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.699001] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.699019] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.699039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.699041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.001 [2024-07-15 12:53:38.699057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.699076] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.699086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.001 [2024-07-15 12:53:38.699095] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.699100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.001 [2024-07-15 12:53:38.699115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.001 [2024-07-15 12:53:38.699125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.001 [2024-07-15 12:53:38.699123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 
00:23:47.001 [2024-07-15 12:53:38.699138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.001 [2024-07-15 12:53:38.699148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.001 [2024-07-15 12:53:38.699145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.699163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.001 [2024-07-15 12:53:38.699168] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.699173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.001 [2024-07-15 12:53:38.699186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81d10 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.699189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.699208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.699216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.001 [2024-07-15 12:53:38.699227] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.699233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c
dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.001 [2024-07-15 12:53:38.699247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.001 [2024-07-15 12:53:38.699248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.699267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.001 [2024-07-15 12:53:38.699280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.001 [2024-07-15 12:53:38.699280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.699290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.001 [2024-07-15 12:53:38.699303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.001 [2024-07-15 12:53:38.699301] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.699315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.001 [2024-07-15 12:53:38.699325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe6630 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.699323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.699344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.699364] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.699383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.001 [2024-07-15 12:53:38.699398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.002 [2024-07-15 12:53:38.699401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.002 [2024-07-15 12:53:38.699411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.699423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.002 [2024-07-15 12:53:38.699422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.002 [2024-07-15 12:53:38.699438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.699443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.002 [2024-07-15 12:53:38.699449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.002 [2024-07-15 12:53:38.699461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.699464] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.002 [2024-07-15 12:53:38.699473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.002 [2024-07-15 12:53:38.699488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.699490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.002 [2024-07-15 12:53:38.699497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd899c0 is same with the state(5) to be set 00:23:47.002 [2024-07-15 12:53:38.699511] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.002 [2024-07-15 12:53:38.699528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb6120 (9): Bad file descriptor 00:23:47.002 [2024-07-15 12:53:38.699530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.002 [2024-07-15 12:53:38.699551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.002 [2024-07-15 12:53:38.699569] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.002 [2024-07-15 12:53:38.699588] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.002 [2024-07-15 12:53:38.699607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.002 [2024-07-15 12:53:38.699615] 
nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:47.002 [2024-07-15 12:53:38.699626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.002 [2024-07-15 12:53:38.699646] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.002 [2024-07-15 12:53:38.699665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.002 [2024-07-15 12:53:38.699683] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.002 [2024-07-15 12:53:38.699702] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.002 [2024-07-15 12:53:38.699721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4450 is same with the state(5) to be set 00:23:47.002 [2024-07-15 12:53:38.700420] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:47.002 [2024-07-15 12:53:38.701921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.701945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.701962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.701973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.701985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.701995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.702007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.702017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.702033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.702043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.702056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.702067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.702080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.702089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.702101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.702111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:47.002 [2024-07-15 12:53:38.702124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.702134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.702146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.702156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.702168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.702178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.702190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.702201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.702213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.702225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.702237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.702246] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.702272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.702282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.702294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.702303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.702318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.702330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.702342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.702351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.702363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.702373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.702385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.702394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.702406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.702415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.702427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.002 [2024-07-15 12:53:38.702437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.002 [2024-07-15 12:53:38.702449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 
12:53:38.702629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702750] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.702972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.702986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 
[2024-07-15 12:53:38.702996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.703007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.703017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.703029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.703038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.703050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.703060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.703072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.703081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.703093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.703103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.703115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.703124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.003 [2024-07-15 12:53:38.703136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.003 [2024-07-15 12:53:38.703148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.703160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.703170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.703182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.703191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.703203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.703212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.703224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.703234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.703245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.703259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.703272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.703283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.703295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.703305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.703317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.703327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.703340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.703349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.703360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb0a40 is same with the state(5) to be set 
00:23:47.004 [2024-07-15 12:53:38.703419] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbb0a40 was disconnected and freed. reset controller.
00:23:47.004 [2024-07-15 12:53:38.706076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:23:47.004 [2024-07-15 12:53:38.706136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd7de0 (9): Bad file descriptor
00:23:47.004 [2024-07-15 12:53:38.707330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:23:47.004 [2024-07-15 12:53:38.707598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.004 [2024-07-15 12:53:38.707618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd7de0 with addr=10.0.0.2, port=4420
00:23:47.004 [2024-07-15 12:53:38.707633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd7de0 is same with the state(5) to be set
00:23:47.004 [2024-07-15 12:53:38.708217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.004 [2024-07-15 12:53:38.708241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd66020 with addr=10.0.0.2, port=4420
00:23:47.004 [2024-07-15 12:53:38.708251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66020 is same with the state(5) to be set
00:23:47.004 [2024-07-15 12:53:38.708273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd7de0 (9): Bad file descriptor
00:23:47.004 [2024-07-15 12:53:38.708630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd66020 (9): Bad file descriptor
00:23:47.004 [2024-07-15 12:53:38.708651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:23:47.004 [2024-07-15 12:53:38.708661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:23:47.004 [2024-07-15 12:53:38.708671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:23:47.004 [2024-07-15 12:53:38.708957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.004 [2024-07-15 12:53:38.708975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:23:47.004 [2024-07-15 12:53:38.708984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:23:47.004 [2024-07-15 12:53:38.708994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:23:47.004 [2024-07-15 12:53:38.709228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.004 [2024-07-15 12:53:38.709249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd81d10 (9): Bad file descriptor
00:23:47.004 [2024-07-15 12:53:38.709277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe6630 (9): Bad file descriptor
00:23:47.004 [2024-07-15 12:53:38.709325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd899c0 (9): Bad file descriptor
00:23:47.004 [2024-07-15 12:53:38.709648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.004 [2024-07-15 12:53:38.709669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.004 [2024-07-15 12:53:38.709684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:23:47.004 [2024-07-15 12:53:38.709695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.709707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.709717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.709729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.709739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.709751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.709760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.709776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.709786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.709800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.709809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.709821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.709831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.709842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.709852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.709864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.709874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.709886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.709896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.709907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.709917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.709929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.709938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.709950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.709960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.709971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.004 [2024-07-15 12:53:38.709981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.004 [2024-07-15 12:53:38.709992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710186] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710309] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 
12:53:38.710557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.005 [2024-07-15 12:53:38.710602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.005 [2024-07-15 12:53:38.710612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.006 [2024-07-15 12:53:38.710624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.006 [2024-07-15 12:53:38.710633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.006 [2024-07-15 12:53:38.710645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.006 [2024-07-15 12:53:38.710655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.006 [2024-07-15 12:53:38.710666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.006 [2024-07-15 12:53:38.710676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.006 [2024-07-15 12:53:38.710687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.006 [2024-07-15 12:53:38.710697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.006 [2024-07-15 12:53:38.710709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.006 [2024-07-15 12:53:38.710718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.006 [2024-07-15 12:53:38.710730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.006 [2024-07-15 12:53:38.710740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.006 [2024-07-15 12:53:38.710752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.006 [2024-07-15 12:53:38.710762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.006 [2024-07-15 12:53:38.710773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.006 [2024-07-15 12:53:38.710782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.006 [2024-07-15 12:53:38.710794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.006 [2024-07-15 12:53:38.710803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.006 [2024-07-15 12:53:38.710815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.006 [2024-07-15 12:53:38.710824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.006 [2024-07-15 12:53:38.710836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.006 [2024-07-15 12:53:38.710845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.006 [2024-07-15 12:53:38.710857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.006 [2024-07-15 12:53:38.710868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.006 [2024-07-15 12:53:38.710880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.006 [2024-07-15 12:53:38.710889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.006 [2024-07-15 12:53:38.710901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.006 [2024-07-15 12:53:38.710910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:47.006 [2024-07-15 12:53:38.710922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.006 [2024-07-15 12:53:38.710931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.006 [2024-07-15 12:53:38.710943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.006 [2024-07-15 12:53:38.710952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.006 [2024-07-15 12:53:38.710964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.006 [2024-07-15 12:53:38.710973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.006 [2024-07-15 12:53:38.710985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.006 [2024-07-15 12:53:38.710994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.006 [2024-07-15 12:53:38.711006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.006 [2024-07-15 12:53:38.711015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.006 [2024-07-15 12:53:38.711027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.006 [2024-07-15 12:53:38.711037] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.006 [2024-07-15 12:53:38.711048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd24010 is same with the state(5) to be set 00:23:47.006 [2024-07-15 12:53:38.713209] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4910 is same with the state(5) to be set 00:23:47.007 [last message repeated 56 times through 2024-07-15 12:53:38.714327] 00:23:47.007 [2024-07-15 12:53:38.714713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.007 [2024-07-15 12:53:38.715337] posix.c:1038:posix_sock_create: *ERROR*: connect() 
failed, errno = 111 00:23:47.007 [2024-07-15 12:53:38.715364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb6120 with addr=10.0.0.2, port=4420 00:23:47.007 [2024-07-15 12:53:38.715375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb6120 is same with the state(5) to be set 00:23:47.007 [2024-07-15 12:53:38.716156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb6120 (9): Bad file descriptor 00:23:47.007 [2024-07-15 12:53:38.716370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.007 [2024-07-15 12:53:38.716387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.007 [2024-07-15 12:53:38.716397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.007 [2024-07-15 12:53:38.716502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.007 [2024-07-15 12:53:38.716648] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:47.007 [2024-07-15 12:53:38.716755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:47.007 [2024-07-15 12:53:38.717040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.007 [2024-07-15 12:53:38.717059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd7de0 with addr=10.0.0.2, port=4420 00:23:47.007 [2024-07-15 12:53:38.717069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd7de0 is same with the state(5) to be set 00:23:47.007 [2024-07-15 12:53:38.717173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd7de0 (9): Bad file descriptor 00:23:47.007 [2024-07-15 12:53:38.717277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:47.007 [2024-07-15 12:53:38.717291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:47.007 [2024-07-15 12:53:38.717300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:47.007 [2024-07-15 12:53:38.717393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.007 [2024-07-15 12:53:38.717560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:47.007 [2024-07-15 12:53:38.717869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.007 [2024-07-15 12:53:38.717889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd66020 with addr=10.0.0.2, port=4420 00:23:47.007 [2024-07-15 12:53:38.717899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66020 is same with the state(5) to be set 00:23:47.007 [2024-07-15 12:53:38.717948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd66020 (9): Bad file descriptor 00:23:47.007 [2024-07-15 12:53:38.717988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:47.007 [2024-07-15 12:53:38.717998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:47.007 [2024-07-15 12:53:38.718008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:47.007 [2024-07-15 12:53:38.718050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.007 [2024-07-15 12:53:38.719333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.007 [2024-07-15 12:53:38.719353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.007 [2024-07-15 12:53:38.719364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.007 [2024-07-15 12:53:38.719374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.007 [2024-07-15 12:53:38.719384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.007 [2024-07-15 12:53:38.719394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.007 [2024-07-15 12:53:38.719409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.007 [2024-07-15 12:53:38.719418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.007 [2024-07-15 12:53:38.719427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b7610 is same with the state(5) to be set 00:23:47.007 [2024-07-15 12:53:38.719589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.007 [2024-07-15 12:53:38.719606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.007 [2024-07-15 12:53:38.719621] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.007 [2024-07-15 12:53:38.719631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.007 [2024-07-15 12:53:38.719643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.007 [2024-07-15 12:53:38.719652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.007 [2024-07-15 12:53:38.719664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.007 [2024-07-15 12:53:38.719674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.007 [2024-07-15 12:53:38.719686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.007 [2024-07-15 12:53:38.719696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.007 [2024-07-15 12:53:38.719707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.007 [2024-07-15 12:53:38.719717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.007 [2024-07-15 12:53:38.719728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.007 [2024-07-15 12:53:38.719738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:47.007 [2024-07-15 12:53:38.719750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.007 [2024-07-15 12:53:38.719759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.007 [2024-07-15 12:53:38.719771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.719780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.719792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.719802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.719813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.719823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.719835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.719848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.719861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 
12:53:38.719870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.719882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.719891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.719903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.719912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.719924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.719933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.719944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.719954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.719965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.719975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.719987] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.719996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 
[2024-07-15 12:53:38.720232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-15 12:53:38.720578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.008 [2024-07-15 12:53:38.720590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.720599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.720611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.720620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.720631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.720641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.720654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.720663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.720674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.720685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.720697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.720706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.720717] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.720726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.720739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.720748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.720759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.720768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.720780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.720789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.720800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.720810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.720821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.720830] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.720842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.720851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.720863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.720872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.720883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.720893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.720904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.720916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.720927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.720936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.720948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.720959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.720969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1a4f0 is same with the state(5) to be set 00:23:47.009 [2024-07-15 12:53:38.722444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.722461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.722476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.722486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.722498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.722508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.722520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.722529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.722542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 
12:53:38.722551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.722562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.722572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.722584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.722593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.722605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.722614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.722626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.722636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.722647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.722660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.722672] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.722681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.722693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.722703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.722716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.722725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.722737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.722746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.722758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.722768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.722780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.722789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.722801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.722811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.722823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-15 12:53:38.722832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.009 [2024-07-15 12:53:38.722844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.722854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.722865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.722875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.722887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.722896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.722908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 
[2024-07-15 12:53:38.722917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.722931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.722940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.722952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.722962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.722974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.722983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.722994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.723004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.723016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.723025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.723037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.723046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.723059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.723068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.723080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.723089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.723101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.723111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.723122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.723132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.723144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.723153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.723164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.723174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.723186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.723197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.723209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.723219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.723231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.723240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.723251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.723266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.723278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.723287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.723299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.723308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.723320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.723329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.723341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.723350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.723362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.723371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.723383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.723392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.723404] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.010 [2024-07-15 12:53:38.723414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.010 [2024-07-15 12:53:38.723426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.723435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.723447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.723456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.723470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.723479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.723491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.723500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.723512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.723521] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.723532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.723542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.723553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.723563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.723574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.723584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.723596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.723605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.723617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.723626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.723638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.723647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.723659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.723671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.723683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.723692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.723704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.723713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.723725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.723736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.723747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.723757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 
12:53:38.723769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.723778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.723789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.723799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.723811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.723820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.723830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25750 is same with the state(5) to be set 00:23:47.011 [2024-07-15 12:53:38.725304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.725321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.725335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.725345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.725357] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.725366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.725378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.725387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.725399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.725408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.725420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.725429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.725442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.725451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.725463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.725476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.725488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.725498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.725509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.725519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.725530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.725540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.725552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.725561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.725573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.725582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.725594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 
12:53:38.725603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.725615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.725625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.725636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.725646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.725658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.725667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.011 [2024-07-15 12:53:38.725679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.011 [2024-07-15 12:53:38.725688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.725700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.725709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.725721] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.725730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.725743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.725752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.725765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.725774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.725786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.725795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.725807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.725816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.725828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.725837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.725849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.725858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.725870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.725879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.725891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.725901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.725913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.725922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.725934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.725943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.725955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 
[2024-07-15 12:53:38.725964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.725976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.725985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.725997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.726020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.726041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.726062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.726085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.726107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.726128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.726149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.726170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.726191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.726213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.726234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.726261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.726288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.726309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.726331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.726353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.726374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.726395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.726417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.726438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.012 [2024-07-15 12:53:38.726459] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-15 12:53:38.726469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.013 [2024-07-15 12:53:38.726481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-15 12:53:38.726490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.013 [2024-07-15 12:53:38.726502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-15 12:53:38.726512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.013 [2024-07-15 12:53:38.726524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-15 12:53:38.726533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.013 [2024-07-15 12:53:38.726545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-15 12:53:38.726556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.013 [2024-07-15 12:53:38.726568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-15 12:53:38.726577] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.013 [2024-07-15 12:53:38.726589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-15 12:53:38.726599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.013 [2024-07-15 12:53:38.726611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-15 12:53:38.726620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.013 [2024-07-15 12:53:38.726632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-15 12:53:38.726642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.013 [2024-07-15 12:53:38.726653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-15 12:53:38.726663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.013 [2024-07-15 12:53:38.726674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-15 12:53:38.726684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.013 [2024-07-15 12:53:38.726694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbaf5b0 is same with the 
state(5) to be set 00:23:47.013 [2024-07-15 12:53:38.728285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:47.013 [2024-07-15 12:53:38.728309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:47.013 [2024-07-15 12:53:38.728322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:47.013 [2024-07-15 12:53:38.728818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.013 [2024-07-15 12:53:38.728841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd81d10 with addr=10.0.0.2, port=4420 00:23:47.013 [2024-07-15 12:53:38.728852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81d10 is same with the state(5) to be set 00:23:47.013 [2024-07-15 12:53:38.729046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.013 [2024-07-15 12:53:38.729059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd899c0 with addr=10.0.0.2, port=4420 00:23:47.013 [2024-07-15 12:53:38.729069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd899c0 is same with the state(5) to be set 00:23:47.013 [2024-07-15 12:53:38.729285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.013 [2024-07-15 12:53:38.729298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe6630 with addr=10.0.0.2, port=4420 00:23:47.013 [2024-07-15 12:53:38.729309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe6630 is same with the state(5) to be set 00:23:47.013 [2024-07-15 12:53:38.730360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.013 [2024-07-15 12:53:38.730380] 
nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:47.013 [2024-07-15 12:53:38.730396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:47.013 [2024-07-15 12:53:38.730430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd81d10 (9): Bad file descriptor 00:23:47.013 [2024-07-15 12:53:38.730444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd899c0 (9): Bad file descriptor 00:23:47.013 [2024-07-15 12:53:38.730456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe6630 (9): Bad file descriptor 00:23:47.013 [2024-07-15 12:53:38.730510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.013 [2024-07-15 12:53:38.730524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.013 [2024-07-15 12:53:38.730535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.013 [2024-07-15 12:53:38.730545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.013 [2024-07-15 12:53:38.730555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.013 [2024-07-15 12:53:38.730565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.013 [2024-07-15 12:53:38.730576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.013 [2024-07-15 12:53:38.730585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.013 [2024-07-15 12:53:38.730594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd77250 is same with the state(5) to be set 00:23:47.013 [2024-07-15 12:53:38.730615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b7610 (9): Bad file descriptor 00:23:47.013 [2024-07-15 12:53:38.730890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.013 [2024-07-15 12:53:38.730911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb6120 with addr=10.0.0.2, port=4420 00:23:47.013 [2024-07-15 12:53:38.730920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb6120 is same with the state(5) to be set 00:23:47.013 [2024-07-15 12:53:38.731086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.013 [2024-07-15 12:53:38.731099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd7de0 with addr=10.0.0.2, port=4420 00:23:47.013 [2024-07-15 12:53:38.731109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd7de0 is same with the state(5) to be set 00:23:47.013 [2024-07-15 12:53:38.731266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.013 [2024-07-15 12:53:38.731279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd66020 with addr=10.0.0.2, port=4420 00:23:47.013 [2024-07-15 12:53:38.731288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66020 is same with the state(5) to be set 00:23:47.013 [2024-07-15 12:53:38.731298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:47.013 [2024-07-15 12:53:38.731306] 
nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:47.013 [2024-07-15 12:53:38.731315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:47.013 [2024-07-15 12:53:38.731330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:47.013 [2024-07-15 12:53:38.731338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:47.013 [2024-07-15 12:53:38.731351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:47.013 [2024-07-15 12:53:38.731364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:47.013 [2024-07-15 12:53:38.731372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:47.013 [2024-07-15 12:53:38.731381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:47.013 [2024-07-15 12:53:38.731532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.013 [2024-07-15 12:53:38.731545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.013 [2024-07-15 12:53:38.731553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.013 [2024-07-15 12:53:38.731564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb6120 (9): Bad file descriptor 00:23:47.013 [2024-07-15 12:53:38.731576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd7de0 (9): Bad file descriptor 00:23:47.013 [2024-07-15 12:53:38.731588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd66020 (9): Bad file descriptor 00:23:47.013 [2024-07-15 12:53:38.731701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.013 [2024-07-15 12:53:38.731712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.013 [2024-07-15 12:53:38.731721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.013 [2024-07-15 12:53:38.731734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:47.013 [2024-07-15 12:53:38.731743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:47.013 [2024-07-15 12:53:38.731751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:47.013 [2024-07-15 12:53:38.731764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:47.013 [2024-07-15 12:53:38.731772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:47.014 [2024-07-15 12:53:38.731781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:47.014 [2024-07-15 12:53:38.731910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.014 [2024-07-15 12:53:38.731922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.014 [2024-07-15 12:53:38.731931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.014 [2024-07-15 12:53:38.732467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4db0 is same with the state(5) to be set 00:23:47.014 [2024-07-15 12:53:38.732527] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4db0 is same with the state(5) to be set 00:23:47.014 [2024-07-15 12:53:38.732549] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4db0 is same with the state(5) to be set 00:23:47.014 [2024-07-15 12:53:38.732568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4db0 is same with the state(5) to be set 00:23:47.014 [2024-07-15 12:53:38.732587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4db0 is same with the state(5) to be set 00:23:47.014 [2024-07-15 12:53:38.732607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4db0 is same with the state(5) to be set 00:23:47.014 [2024-07-15 12:53:38.732625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4db0 is same with the state(5) to be set 00:23:47.014 [2024-07-15 12:53:38.732644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4db0 is same with the state(5) to be set 00:23:47.014 [2024-07-15 12:53:38.732672] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4db0 is same with the state(5) to be set 00:23:47.014 [2024-07-15 12:53:38.732690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4db0 is same with the state(5) to be set 00:23:47.014 [2024-07-15 12:53:38.732710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x13b4db0 is same with the state(5) to be set 00:23:47.014 [2024-07-15 12:53:38.732729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4db0 is same with the state(5) to be set 00:23:47.014 [2024-07-15 12:53:38.733727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.014 [2024-07-15 12:53:38.733750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.014 [2024-07-15
12:53:38.733766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.014 [2024-07-15 12:53:38.733776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.014 [2024-07-15 12:53:38.733789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.014 [2024-07-15 12:53:38.733803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.014 [2024-07-15 12:53:38.733816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.014 [2024-07-15 12:53:38.733826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.014 [2024-07-15 12:53:38.733838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.014 [2024-07-15 12:53:38.733848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.014 [2024-07-15 12:53:38.733860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.014 [2024-07-15 12:53:38.733869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.014 [2024-07-15 12:53:38.733881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.014 [2024-07-15 12:53:38.733890] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.733902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.733912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.733924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.733934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.733946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.733955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.733967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.733976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.733988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.733998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.734010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.734031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.734053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.734076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.734098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.734118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:47.015 [2024-07-15 12:53:38.734140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.734161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.734183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.734204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.734225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.734246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734262] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.734275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.734296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.734317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.734339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.734362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.734384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.734406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.734427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.734448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.015 [2024-07-15 12:53:38.734469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.015 [2024-07-15 12:53:38.734479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 
12:53:38.734629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734747] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.734971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.734984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 
[2024-07-15 12:53:38.734993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.735005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.735014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.735026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.016 [2024-07-15 12:53:38.735036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.016 [2024-07-15 12:53:38.735046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7e430 is same with the state(5) to be set 00:23:47.016 [2024-07-15 12:53:38.735101] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc7e430 was disconnected and freed. reset controller. 
00:23:47.016 [2024-07-15 12:53:38.735374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b5710 is same with the state(5) to be set 00:23:47.017 [2024-07-15 12:53:38.736560] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b5710 is same with the state(5) to be set 00:23:47.017 [2024-07-15 12:53:38.736578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b5710 is same with the state(5) to be set 00:23:47.017 [2024-07-15 12:53:38.736587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:47.017 [2024-07-15 12:53:38.736597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b5710 is same with the state(5) to be set 00:23:47.017 [2024-07-15 12:53:38.736619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd77250 (9): Bad file descriptor 00:23:47.017 [2024-07-15 12:53:38.737274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.017 [2024-07-15 12:53:38.737297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd77250 with addr=10.0.0.2, port=4420 00:23:47.017 [2024-07-15 12:53:38.737307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd77250 is same with the state(5) to be set 00:23:47.017 [2024-07-15 12:53:38.737396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd77250 (9): Bad file descriptor 00:23:47.017 [2024-07-15 12:53:38.737478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:47.017 [2024-07-15 12:53:38.737490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:47.017 [2024-07-15 12:53:38.737498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:47.017 [2024-07-15 12:53:38.737585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
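For readers decoding the failure above: the `errno = 111` from `posix_sock_create` is `ECONNREFUSED` on Linux (nothing was accepting TCP connections on 10.0.0.2:4420 at that instant, as expected mid-reset), and the `(9)` in the "Failed to flush" entries is `EBADF`. A minimal sketch to confirm the mapping, assuming a Linux host like this CI node:

```python
import errno
import os

# EBADF ("Bad file descriptor") is 9 on POSIX systems; the flush failed
# because the qpair's socket had already been torn down.
assert errno.EBADF == 9

# errno 111 is ECONNREFUSED ("Connection refused") on Linux; the numeric
# value is platform-specific (e.g. it is 61 on macOS).
print(errno.errorcode[111])          # ECONNREFUSED (on Linux)
print(os.strerror(errno.ECONNREFUSED))
```

This is only a decoding aid; the log itself shows the expected reconnect-poll sequence retrying until the listener is back.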
00:23:47.017 [2024-07-15 12:53:38.737860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.017 [2024-07-15 12:53:38.737876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
  (analogous READ command/completion pairs repeated for cid:1 through cid:63, lba 16512 through 24448 in steps of 128, 12:53:38.737891 through 12:53:38.739236)
00:23:47.019 [2024-07-15 12:53:38.739246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80c10 is same with the state(5) to be set
00:23:47.019 [2024-07-15 12:53:38.739306] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc80c10 was disconnected and freed. reset controller.
00:23:47.019 [2024-07-15 12:53:38.740732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:23:47.019 [2024-07-15 12:53:38.740779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd59bc0 (9): Bad file descriptor
00:23:47.019 [2024-07-15 12:53:38.740808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.019 [2024-07-15 12:53:38.740820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.019 [2024-07-15 12:53:38.740831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.019 [2024-07-15 12:53:38.740841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 12:53:38.740851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.019 [2024-07-15 12:53:38.740860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.019 [2024-07-15 12:53:38.740871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.019 [2024-07-15 12:53:38.740880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.019 [2024-07-15 12:53:38.740889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7eac0 is same with the state(5) to be set 00:23:47.019 [2024-07-15 12:53:38.740966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:47.019 [2024-07-15 12:53:38.740981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:47.019 [2024-07-15 12:53:38.741000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:47.019 [2024-07-15 12:53:38.741093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.019 [2024-07-15 12:53:38.741106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.019 [2024-07-15 12:53:38.741119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.019 [2024-07-15 12:53:38.741129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.019 
[2024-07-15 12:53:38.741140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.019 [2024-07-15 12:53:38.741150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.019 [2024-07-15 12:53:38.741163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.019 [2024-07-15 12:53:38.741172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.019 [2024-07-15 12:53:38.741184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.019 [2024-07-15 12:53:38.741193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.019 [2024-07-15 12:53:38.741205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.019 [2024-07-15 12:53:38.741214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.019 [2024-07-15 12:53:38.741226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.019 [2024-07-15 12:53:38.741235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.019 [2024-07-15 12:53:38.741247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.019 [2024-07-15 12:53:38.741270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.019 [2024-07-15 12:53:38.741283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.019 [2024-07-15 12:53:38.741292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.019 [2024-07-15 12:53:38.741304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.019 [2024-07-15 12:53:38.741313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 [2024-07-15 12:53:38.741883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 12:53:38.741892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.741904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.741915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.741927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.741936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.741948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.741958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.741969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.741979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.741991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.021 [2024-07-15 12:53:38.742483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.021 [2024-07-15 12:53:38.742494] nvme_tcp.c:
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1ed0 is same with the state(5) to be set
00:23:47.021 [2024-07-15 12:53:38.744494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:23:47.021 [2024-07-15 12:53:38.744515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:23:47.021 [2024-07-15 12:53:38.744528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:47.021 task offset: 18304 on job bdev=Nvme10n1 fails
00:23:47.021
00:23:47.021 Latency(us)
00:23:47.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:47.021 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.021 Job: Nvme1n1 ended in about 0.99 seconds with error
00:23:47.021 Verification LBA range: start 0x0 length 0x400
00:23:47.021 Nvme1n1 : 0.99 133.53 8.35 64.74 0.00 318493.11 29312.47 301227.29
00:23:47.021 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.021 Job: Nvme2n1 ended in about 1.00 seconds with error
00:23:47.021 Verification LBA range: start 0x0 length 0x400
00:23:47.021 Nvme2n1 : 1.00 128.20 8.01 64.10 0.00 320465.45 51713.86 278349.27
00:23:47.021 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.021 Job: Nvme3n1 ended in about 1.00 seconds with error
00:23:47.021 Verification LBA range: start 0x0 length 0x400
00:23:47.021 Nvme3n1 : 1.00 127.84 7.99 63.92 0.00 313606.21 29193.31 337450.82
00:23:47.021 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.021 Job: Nvme4n1 ended in about 1.00 seconds with error
00:23:47.021 Verification LBA range: start 0x0 length 0x400
00:23:47.021 Nvme4n1 : 1.00 127.48 7.97 63.74 0.00 306530.06 25618.62 303133.79
00:23:47.021 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.021 Job: Nvme5n1 ended in about 0.98 seconds with error
00:23:47.021 Verification LBA range: start 0x0 length 0x400
00:23:47.021 Nvme5n1 : 0.98 130.40 8.15 65.20 0.00 291228.24 14834.97 297414.28
00:23:47.021 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.021 Job: Nvme6n1 ended in about 1.02 seconds with error
00:23:47.021 Verification LBA range: start 0x0 length 0x400
00:23:47.021 Nvme6n1 : 1.02 125.50 7.84 62.75 0.00 296029.09 32648.84 276442.76
00:23:47.021 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.021 Job: Nvme7n1 ended in about 1.01 seconds with error
00:23:47.022 Verification LBA range: start 0x0 length 0x400
00:23:47.022 Nvme7n1 : 1.01 130.36 8.15 59.26 0.00 285605.31 7298.33 348889.83
00:23:47.022 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.022 Verification LBA range: start 0x0 length 0x400
00:23:47.022 Nvme8n1 : 0.99 193.81 12.11 0.00 0.00 270467.57 28359.21 312666.30
00:23:47.022 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.022 Job: Nvme9n1 ended in about 1.02 seconds with error
00:23:47.022 Verification LBA range: start 0x0 length 0x400
00:23:47.022 Nvme9n1 : 1.02 125.89 7.87 62.95 0.00 271327.42 20375.74 285975.27
00:23:47.022 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.022 Job: Nvme10n1 ended in about 0.97 seconds with error
00:23:47.022 Verification LBA range: start 0x0 length 0x400
00:23:47.022 Nvme10n1 : 0.97 136.03 8.50 65.95 0.00 242923.96 6315.29 295507.78
00:23:47.022 ===================================================================================================================
00:23:47.022 Total : 1359.04 84.94 572.61 0.00 291622.17 6315.29 348889.83
00:23:47.022 [2024-07-15 12:53:38.773553] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:47.022 [2024-07-15 12:53:38.773599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*:
[nqn.2016-06.io.spdk:cnode6] resetting controller
00:23:47.022 [2024-07-15 12:53:38.773851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.022 [2024-07-15 12:53:38.773872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd59bc0 with addr=10.0.0.2, port=4420
00:23:47.022 [2024-07-15 12:53:38.773885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd59bc0 is same with the state(5) to be set
00:23:47.022 [2024-07-15 12:53:38.774133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.022 [2024-07-15 12:53:38.774147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe6630 with addr=10.0.0.2, port=4420
00:23:47.022 [2024-07-15 12:53:38.774157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe6630 is same with the state(5) to be set
00:23:47.022 [2024-07-15 12:53:38.774403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.022 [2024-07-15 12:53:38.774418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd899c0 with addr=10.0.0.2, port=4420
00:23:47.022 [2024-07-15 12:53:38.774427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd899c0 is same with the state(5) to be set
00:23:47.022 [2024-07-15 12:53:38.774671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.022 [2024-07-15 12:53:38.774686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd81d10 with addr=10.0.0.2, port=4420
00:23:47.022 [2024-07-15 12:53:38.774696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd81d10 is same with the state(5) to be set
00:23:47.022 [2024-07-15 12:53:38.775308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.022 [2024-07-15 12:53:38.775332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd66020 with addr=10.0.0.2, port=4420
00:23:47.022 [2024-07-15 12:53:38.775343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66020 is same with the state(5) to be set
00:23:47.022 [2024-07-15 12:53:38.775514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.022 [2024-07-15 12:53:38.775529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd7de0 with addr=10.0.0.2, port=4420
00:23:47.022 [2024-07-15 12:53:38.775538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd7de0 is same with the state(5) to be set
00:23:47.022 [2024-07-15 12:53:38.775785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.022 [2024-07-15 12:53:38.775799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb6120 with addr=10.0.0.2, port=4420
00:23:47.022 [2024-07-15 12:53:38.775809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb6120 is same with the state(5) to be set
00:23:47.022 [2024-07-15 12:53:38.776008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.022 [2024-07-15 12:53:38.776022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b7610 with addr=10.0.0.2, port=4420
00:23:47.022 [2024-07-15 12:53:38.776031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b7610 is same with the state(5) to be set
00:23:47.022 [2024-07-15 12:53:38.776048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd59bc0 (9): Bad file descriptor
00:23:47.022 [2024-07-15 12:53:38.776062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe6630 (9): Bad file descriptor
00:23:47.022 [2024-07-15 12:53:38.776080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd899c0 (9): Bad file descriptor
00:23:47.022 [2024-07-15 12:53:38.776092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd81d10 (9): Bad file descriptor
00:23:47.022 [2024-07-15 12:53:38.776136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7eac0 (9): Bad file descriptor
00:23:47.022 [2024-07-15 12:53:38.776164] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:47.022 [2024-07-15 12:53:38.776178] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:47.022 [2024-07-15 12:53:38.776193] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:47.022 [2024-07-15 12:53:38.776206] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:47.022 [2024-07-15 12:53:38.776591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd66020 (9): Bad file descriptor
00:23:47.022 [2024-07-15 12:53:38.776609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd7de0 (9): Bad file descriptor
00:23:47.022 [2024-07-15 12:53:38.776622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb6120 (9): Bad file descriptor
00:23:47.022 [2024-07-15 12:53:38.776634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b7610 (9): Bad file descriptor
00:23:47.022 [2024-07-15 12:53:38.776644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:23:47.022 [2024-07-15 12:53:38.776653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:23:47.022 [2024-07-15 12:53:38.776663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:23:47.022 [2024-07-15 12:53:38.776678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:23:47.022 [2024-07-15 12:53:38.776687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:23:47.022 [2024-07-15 12:53:38.776696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:23:47.022 [2024-07-15 12:53:38.776708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:23:47.022 [2024-07-15 12:53:38.776717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:23:47.022 [2024-07-15 12:53:38.776726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:23:47.022 [2024-07-15 12:53:38.776738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:23:47.022 [2024-07-15 12:53:38.776747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:23:47.022 [2024-07-15 12:53:38.776756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:23:47.022 [2024-07-15 12:53:38.776810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:23:47.022 [2024-07-15 12:53:38.776824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.022 [2024-07-15 12:53:38.776832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.022 [2024-07-15 12:53:38.776840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.022 [2024-07-15 12:53:38.776848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.022 [2024-07-15 12:53:38.776864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:23:47.022 [2024-07-15 12:53:38.776873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:23:47.022 [2024-07-15 12:53:38.776885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:23:47.022 [2024-07-15 12:53:38.776897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:23:47.022 [2024-07-15 12:53:38.776905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:23:47.022 [2024-07-15 12:53:38.776914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:23:47.022 [2024-07-15 12:53:38.776926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:47.022 [2024-07-15 12:53:38.776935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:47.022 [2024-07-15 12:53:38.776943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:47.022 [2024-07-15 12:53:38.776955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:23:47.022 [2024-07-15 12:53:38.776963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:23:47.022 [2024-07-15 12:53:38.776973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:23:47.022 [2024-07-15 12:53:38.777014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.022 [2024-07-15 12:53:38.777024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.022 [2024-07-15 12:53:38.777032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.022 [2024-07-15 12:53:38.777040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.022 [2024-07-15 12:53:38.777321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.022 [2024-07-15 12:53:38.777337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd77250 with addr=10.0.0.2, port=4420
00:23:47.022 [2024-07-15 12:53:38.777347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd77250 is same with the state(5) to be set
00:23:47.022 [2024-07-15 12:53:38.777383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd77250 (9): Bad file descriptor
00:23:47.022 [2024-07-15 12:53:38.777419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:23:47.022 [2024-07-15 12:53:38.777428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:23:47.022 [2024-07-15 12:53:38.777437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:23:47.023 [2024-07-15 12:53:38.777470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.626 12:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:47.626 12:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 4013863 00:23:48.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (4013863) - No such process 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:48.564 rmmod nvme_tcp 00:23:48.564 rmmod nvme_fabrics 00:23:48.564 rmmod nvme_keyring 00:23:48.564 12:53:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.564 12:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.099 12:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:51.099 00:23:51.099 real 0m7.681s 00:23:51.099 user 0m18.435s 00:23:51.099 sys 0m1.447s 00:23:51.099 12:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:51.099 12:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:51.099 ************************************ 00:23:51.099 END TEST nvmf_shutdown_tc3 00:23:51.099 ************************************ 00:23:51.099 12:53:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # 
return 0 00:23:51.099 12:53:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:51.099 00:23:51.099 real 0m32.968s 00:23:51.099 user 1m23.519s 00:23:51.099 sys 0m8.988s 00:23:51.099 12:53:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:51.099 12:53:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:51.099 ************************************ 00:23:51.099 END TEST nvmf_shutdown 00:23:51.099 ************************************ 00:23:51.099 12:53:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:51.099 12:53:42 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:51.099 12:53:42 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:51.099 12:53:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:51.099 12:53:42 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:51.099 12:53:42 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:51.099 12:53:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:51.099 12:53:42 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:51.099 12:53:42 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:51.099 12:53:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:51.099 12:53:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:51.099 12:53:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:51.099 ************************************ 00:23:51.099 START TEST nvmf_multicontroller 00:23:51.099 ************************************ 00:23:51.099 12:53:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:51.099 * Looking for test storage... 
00:23:51.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:51.099 12:53:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:51.099 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:51.099 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.099 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.099 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.099 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.099 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.099 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.099 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.099 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.099 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.099 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.099 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:51.100 
12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:51.100 12:53:42 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:51.100 12:53:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.373 12:53:48 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.373 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:56.374 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:56.374 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.374 12:53:48 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:56.374 Found net devices under 0000:af:00.0: cvl_0_0 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:56.374 Found net devices under 0000:af:00.1: cvl_0_1 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # 
nvmf_tcp_init 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.374 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.632 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.632 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.632 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:56.632 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.632 12:53:48 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.632 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.632 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:56.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:23:56.632 00:23:56.632 --- 10.0.0.2 ping statistics --- 00:23:56.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.632 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:23:56.632 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:56.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:23:56.632 00:23:56.632 --- 10.0.0.1 ping statistics --- 00:23:56.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.632 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:23:56.632 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.633 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:56.633 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:56.633 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.633 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:56.633 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:56.633 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.633 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:56.633 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 
-- # modprobe nvme-tcp 00:23:56.633 12:53:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:56.633 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:56.633 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:56.633 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:56.633 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=4018335 00:23:56.633 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 4018335 00:23:56.633 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 4018335 ']' 00:23:56.633 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.633 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:56.633 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.633 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.633 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.633 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:56.633 [2024-07-15 12:53:48.568432] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:23:56.633 [2024-07-15 12:53:48.568489] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.891 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.891 [2024-07-15 12:53:48.657300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:56.891 [2024-07-15 12:53:48.764713] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.891 [2024-07-15 12:53:48.764759] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.891 [2024-07-15 12:53:48.764772] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.891 [2024-07-15 12:53:48.764784] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.891 [2024-07-15 12:53:48.764793] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:56.891 [2024-07-15 12:53:48.764858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.891 [2024-07-15 12:53:48.764949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:56.891 [2024-07-15 12:53:48.764951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.148 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:57.148 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:57.148 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:57.148 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:57.148 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.148 12:53:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.148 12:53:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:57.148 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.148 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.148 [2024-07-15 12:53:48.943798] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.148 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.148 12:53:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:57.148 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.148 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.148 Malloc0 00:23:57.148 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:23:57.148 12:53:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:57.148 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.148 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.148 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.148 12:53:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:57.148 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.148 12:53:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.148 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.148 12:53:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:57.148 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.148 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.148 [2024-07-15 12:53:49.010800] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.148 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.148 12:53:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:57.148 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.148 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.148 [2024-07-15 12:53:49.018733] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:57.148 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.148 12:53:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:57.148 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.149 Malloc1 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=4018428 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 4018428 /var/tmp/bdevperf.sock 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 4018428 ']' 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:57.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.149 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.715 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:57.715 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:57.715 12:53:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:57.715 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.715 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.715 NVMe0n1 00:23:57.715 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.715 12:53:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:57.715 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.715 12:53:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:57.715 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.715 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.715 1 00:23:57.715 12:53:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:57.974 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:57.974 12:53:49 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:57.974 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:57.974 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:57.974 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:57.974 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:57.974 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:57.974 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.974 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.974 request: 00:23:57.974 { 00:23:57.974 "name": "NVMe0", 00:23:57.974 "trtype": "tcp", 00:23:57.974 "traddr": "10.0.0.2", 00:23:57.974 "adrfam": "ipv4", 00:23:57.974 "trsvcid": "4420", 00:23:57.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.974 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:57.974 "hostaddr": "10.0.0.2", 00:23:57.974 "hostsvcid": "60000", 00:23:57.974 "prchk_reftag": false, 00:23:57.974 "prchk_guard": false, 00:23:57.974 "hdgst": false, 00:23:57.974 "ddgst": false, 00:23:57.974 "method": "bdev_nvme_attach_controller", 00:23:57.974 "req_id": 1 00:23:57.974 } 00:23:57.974 Got JSON-RPC error response 00:23:57.974 response: 00:23:57.974 { 00:23:57.974 "code": -114, 00:23:57.974 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:57.974 } 00:23:57.974 
12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:57.974 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:57.974 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:57.974 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:57.974 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:57.974 12:53:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:57.974 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:57.974 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:57.974 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:57.974 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:57.974 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:57.974 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:23:57.975 request: 00:23:57.975 { 00:23:57.975 "name": "NVMe0", 00:23:57.975 "trtype": "tcp", 00:23:57.975 "traddr": "10.0.0.2", 00:23:57.975 "adrfam": "ipv4", 00:23:57.975 "trsvcid": "4420", 00:23:57.975 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:57.975 "hostaddr": "10.0.0.2", 00:23:57.975 "hostsvcid": "60000", 00:23:57.975 "prchk_reftag": false, 00:23:57.975 "prchk_guard": false, 00:23:57.975 "hdgst": false, 00:23:57.975 "ddgst": false, 00:23:57.975 "method": "bdev_nvme_attach_controller", 00:23:57.975 "req_id": 1 00:23:57.975 } 00:23:57.975 Got JSON-RPC error response 00:23:57.975 response: 00:23:57.975 { 00:23:57.975 "code": -114, 00:23:57.975 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:57.975 } 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 
-- # local arg=rpc_cmd 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.975 request: 00:23:57.975 { 00:23:57.975 "name": "NVMe0", 00:23:57.975 "trtype": "tcp", 00:23:57.975 "traddr": "10.0.0.2", 00:23:57.975 "adrfam": "ipv4", 00:23:57.975 "trsvcid": "4420", 00:23:57.975 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.975 "hostaddr": "10.0.0.2", 00:23:57.975 "hostsvcid": "60000", 00:23:57.975 "prchk_reftag": false, 00:23:57.975 "prchk_guard": false, 00:23:57.975 "hdgst": false, 00:23:57.975 "ddgst": false, 00:23:57.975 "multipath": "disable", 00:23:57.975 "method": "bdev_nvme_attach_controller", 00:23:57.975 "req_id": 1 00:23:57.975 } 00:23:57.975 Got JSON-RPC error response 00:23:57.975 response: 00:23:57.975 { 00:23:57.975 "code": -114, 00:23:57.975 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:57.975 } 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.975 request: 00:23:57.975 { 00:23:57.975 "name": "NVMe0", 00:23:57.975 "trtype": "tcp", 00:23:57.975 "traddr": "10.0.0.2", 00:23:57.975 "adrfam": "ipv4", 00:23:57.975 "trsvcid": "4420", 00:23:57.975 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.975 "hostaddr": "10.0.0.2", 00:23:57.975 
"hostsvcid": "60000", 00:23:57.975 "prchk_reftag": false, 00:23:57.975 "prchk_guard": false, 00:23:57.975 "hdgst": false, 00:23:57.975 "ddgst": false, 00:23:57.975 "multipath": "failover", 00:23:57.975 "method": "bdev_nvme_attach_controller", 00:23:57.975 "req_id": 1 00:23:57.975 } 00:23:57.975 Got JSON-RPC error response 00:23:57.975 response: 00:23:57.975 { 00:23:57.975 "code": -114, 00:23:57.975 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:57.975 } 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.975 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.975 12:53:49 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.975 12:53:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.234 00:23:58.234 12:53:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.234 12:53:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:58.234 12:53:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:58.234 12:53:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.234 12:53:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.234 12:53:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.234 12:53:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:58.234 12:53:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:59.610 0 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.610 
12:53:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 4018428 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 4018428 ']' 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 4018428 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4018428 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4018428' 00:23:59.610 killing process with pid 4018428 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 4018428 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 4018428 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.610 12:53:51 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:59.610 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:59.869 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:59.869 [2024-07-15 12:53:49.127030] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:23:59.869 [2024-07-15 12:53:49.127095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4018428 ] 00:23:59.869 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.869 [2024-07-15 12:53:49.209726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.869 [2024-07-15 12:53:49.300890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.869 [2024-07-15 12:53:50.080767] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name ffa972c3-99a5-4010-b5dd-587119648847 already exists 00:23:59.869 [2024-07-15 12:53:50.080808] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:ffa972c3-99a5-4010-b5dd-587119648847 alias for bdev NVMe1n1 00:23:59.869 [2024-07-15 12:53:50.080820] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:59.869 Running I/O for 1 seconds... 
00:23:59.869 00:23:59.869 Latency(us) 00:23:59.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.869 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:59.869 NVMe0n1 : 1.01 7832.21 30.59 0.00 0.00 16300.18 8996.31 29193.31 00:23:59.869 =================================================================================================================== 00:23:59.869 Total : 7832.21 30.59 0.00 0.00 16300.18 8996.31 29193.31 00:23:59.869 Received shutdown signal, test time was about 1.000000 seconds 00:23:59.869 00:23:59.869 Latency(us) 00:23:59.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.869 =================================================================================================================== 00:23:59.869 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:59.869 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:59.869 rmmod nvme_tcp 00:23:59.869 rmmod nvme_fabrics 00:23:59.869 rmmod nvme_keyring 
00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 4018335 ']' 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 4018335 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 4018335 ']' 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 4018335 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4018335 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4018335' 00:23:59.869 killing process with pid 4018335 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 4018335 00:23:59.869 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 4018335 00:24:00.128 12:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:00.128 12:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:00.129 12:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:00.129 12:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:00.129 12:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:00.129 12:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.129 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.129 12:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.662 12:53:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:02.662 00:24:02.662 real 0m11.451s 00:24:02.662 user 0m13.749s 00:24:02.662 sys 0m5.169s 00:24:02.662 12:53:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:02.662 12:53:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.662 ************************************ 00:24:02.662 END TEST nvmf_multicontroller 00:24:02.662 ************************************ 00:24:02.662 12:53:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:02.662 12:53:54 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:02.662 12:53:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:02.662 12:53:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:02.662 12:53:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:02.662 ************************************ 00:24:02.662 START TEST nvmf_aer 00:24:02.662 ************************************ 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:02.662 * Looking for test storage... 
00:24:02.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:24:02.662 12:53:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:07.938 12:53:59 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 
== e810 ]] 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:07.938 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:07.938 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:07.938 Found net devices under 0000:af:00.0: cvl_0_0 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:07.938 Found net devices under 0000:af:00.1: cvl_0_1 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:07.938 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:08.198 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:08.198 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:08.198 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set lo up 00:24:08.198 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:08.198 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:08.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:24:08.198 00:24:08.198 --- 10.0.0.2 ping statistics --- 00:24:08.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.198 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:24:08.198 12:53:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:08.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:08.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:24:08.198 00:24:08.198 --- 10.0.0.1 ping statistics --- 00:24:08.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.198 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=4022436 00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 4022436 00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 4022436 ']' 00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:08.198 12:54:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:08.198 [2024-07-15 12:54:00.112207] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:24:08.198 [2024-07-15 12:54:00.112279] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.456 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.456 [2024-07-15 12:54:00.200959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:08.456 [2024-07-15 12:54:00.291026] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:08.456 [2024-07-15 12:54:00.291068] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.456 [2024-07-15 12:54:00.291078] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.456 [2024-07-15 12:54:00.291087] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.456 [2024-07-15 12:54:00.291094] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:08.456 [2024-07-15 12:54:00.291156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.456 [2024-07-15 12:54:00.291312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.456 [2024-07-15 12:54:00.291372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.456 [2024-07-15 12:54:00.291371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:09.392 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:09.392 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:24:09.392 12:54:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:09.392 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:09.392 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:09.392 12:54:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.392 12:54:01 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:09.392 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.392 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:09.392 [2024-07-15 12:54:01.103499] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.392 12:54:01 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.392 12:54:01 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:09.392 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.392 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:09.392 Malloc0 00:24:09.392 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.392 12:54:01 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:09.392 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.392 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:09.392 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.392 12:54:01 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:09.392 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:09.393 [2024-07-15 12:54:01.163194] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:09.393 [ 00:24:09.393 { 00:24:09.393 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:09.393 "subtype": "Discovery", 00:24:09.393 "listen_addresses": [], 00:24:09.393 "allow_any_host": true, 00:24:09.393 "hosts": [] 00:24:09.393 }, 00:24:09.393 { 00:24:09.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.393 "subtype": "NVMe", 00:24:09.393 "listen_addresses": [ 00:24:09.393 { 00:24:09.393 "trtype": "TCP", 00:24:09.393 "adrfam": "IPv4", 00:24:09.393 "traddr": "10.0.0.2", 00:24:09.393 "trsvcid": "4420" 00:24:09.393 } 00:24:09.393 ], 00:24:09.393 "allow_any_host": true, 00:24:09.393 "hosts": [], 00:24:09.393 "serial_number": "SPDK00000000000001", 00:24:09.393 "model_number": "SPDK bdev Controller", 00:24:09.393 "max_namespaces": 2, 00:24:09.393 "min_cntlid": 1, 00:24:09.393 "max_cntlid": 65519, 00:24:09.393 "namespaces": [ 00:24:09.393 { 00:24:09.393 "nsid": 1, 00:24:09.393 "bdev_name": "Malloc0", 00:24:09.393 "name": "Malloc0", 00:24:09.393 "nguid": "D96B86870EF7476F83720A361C104E11", 00:24:09.393 "uuid": "d96b8687-0ef7-476f-8372-0a361c104e11" 00:24:09.393 } 00:24:09.393 ] 00:24:09.393 } 00:24:09.393 ] 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=4022780 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:09.393 12:54:01 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:09.393 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:24:09.393 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:09.652 Malloc1 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:09.652 [ 00:24:09.652 { 00:24:09.652 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:09.652 "subtype": "Discovery", 00:24:09.652 "listen_addresses": [], 00:24:09.652 "allow_any_host": true, 00:24:09.652 "hosts": [] 00:24:09.652 }, 00:24:09.652 { 00:24:09.652 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.652 "subtype": "NVMe", 00:24:09.652 "listen_addresses": [ 00:24:09.652 { 00:24:09.652 "trtype": "TCP", 00:24:09.652 "adrfam": "IPv4", 00:24:09.652 "traddr": "10.0.0.2", 00:24:09.652 "trsvcid": "4420" 00:24:09.652 } 00:24:09.652 ], 00:24:09.652 "allow_any_host": true, 00:24:09.652 "hosts": [], 00:24:09.652 "serial_number": "SPDK00000000000001", 00:24:09.652 "model_number": "SPDK bdev Controller", 00:24:09.652 "max_namespaces": 2, 00:24:09.652 "min_cntlid": 1, 00:24:09.652 "max_cntlid": 65519, 
00:24:09.652 "namespaces": [ 00:24:09.652 { 00:24:09.652 "nsid": 1, 00:24:09.652 "bdev_name": "Malloc0", 00:24:09.652 "name": "Malloc0", 00:24:09.652 "nguid": "D96B86870EF7476F83720A361C104E11", 00:24:09.652 "uuid": "d96b8687-0ef7-476f-8372-0a361c104e11" 00:24:09.652 }, 00:24:09.652 { 00:24:09.652 "nsid": 2, 00:24:09.652 "bdev_name": "Malloc1", 00:24:09.652 "name": "Malloc1", 00:24:09.652 "nguid": "4AF183B3F7864DB6B82C555E2F3698BD", 00:24:09.652 "uuid": "4af183b3-f786-4db6-b82c-555e2f3698bd" 00:24:09.652 } 00:24:09.652 ] 00:24:09.652 } 00:24:09.652 ] 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 4022780 00:24:09.652 Asynchronous Event Request test 00:24:09.652 Attaching to 10.0.0.2 00:24:09.652 Attached to 10.0.0.2 00:24:09.652 Registering asynchronous event callbacks... 00:24:09.652 Starting namespace attribute notice tests for all controllers... 00:24:09.652 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:09.652 aer_cb - Changed Namespace 00:24:09.652 Cleaning up... 
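The trace above shows the test's synchronization step: `aer.sh` passes `-t /tmp/aer_touch_file` to the aer tool and then spins in the `waitforfile` helper from common/autotest_common.sh, re-checking `[ ! -e file ]` every 0.1 s for up to 200 iterations (the `'[' 0 -lt 200 ']'` / `sleep 0.1` lines). A minimal standalone sketch of that polling logic, reconstructed from the trace (the 200-iteration bound and 0.1 s interval are read off the xtrace; the helper's exact body in the source tree may differ):

```shell
#!/usr/bin/env bash
# Sketch of the waitforfile polling loop seen in the xtrace:
# poll for a touch file, up to 200 tries at 0.1 s apart (~20 s).
waitforfile() {
    local i=0
    while [ ! -e "$1" ]; do
        # Give up after 200 iterations, mirroring the traced bound.
        [ "$i" -ge 200 ] && return 1
        i=$((i + 1))
        sleep 0.1
    done
    return 0
}

# Demo: create the file, then wait on it (returns immediately).
touch /tmp/aer_demo_touch_file
waitforfile /tmp/aer_demo_touch_file && echo "file present"
rm -f /tmp/aer_demo_touch_file
```

The aer tool touches the file only after its AER callbacks are registered, so the script blocks here until the target side is ready before issuing the `nvmf_subsystem_add_ns` that triggers the namespace-change AEN seen in the output above.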
00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.652 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:09.911 rmmod nvme_tcp 00:24:09.911 rmmod nvme_fabrics 00:24:09.911 rmmod nvme_keyring 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer 
-- nvmf/common.sh@124 -- # set -e 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 4022436 ']' 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 4022436 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 4022436 ']' 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 4022436 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4022436 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4022436' 00:24:09.911 killing process with pid 4022436 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 4022436 00:24:09.911 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 4022436 00:24:10.170 12:54:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:10.170 12:54:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:10.170 12:54:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:10.170 12:54:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:10.170 12:54:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:10.170 12:54:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.170 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:24:10.170 12:54:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.701 12:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:12.701 00:24:12.701 real 0m9.929s 00:24:12.701 user 0m8.332s 00:24:12.701 sys 0m4.942s 00:24:12.701 12:54:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:12.701 12:54:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:12.701 ************************************ 00:24:12.701 END TEST nvmf_aer 00:24:12.701 ************************************ 00:24:12.701 12:54:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:12.701 12:54:04 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:12.701 12:54:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:12.701 12:54:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.701 12:54:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:12.701 ************************************ 00:24:12.701 START TEST nvmf_async_init 00:24:12.701 ************************************ 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:12.701 * Looking for test storage... 
00:24:12.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=be40ce4fa4974952b87dd5ec0f480ea3 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.701 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:12.702 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:12.702 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:12.702 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.702 12:54:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.702 12:54:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.702 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:12.702 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:12.702 12:54:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:24:12.702 12:54:04 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:24:17.976 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.976 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:17.976 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:17.976 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:17.976 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:17.976 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:17.976 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:17.976 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.977 
12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:17.977 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:17.977 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:17.977 Found net devices under 0000:af:00.0: cvl_0_0 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:17.977 Found net devices under 0000:af:00.1: cvl_0_1 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:17.977 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:18.237 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:18.237 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:18.237 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:18.237 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:18.237 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:18.237 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:18.237 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:18.237 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:24:18.237 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:18.237 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:18.237 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:18.237 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:18.237 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:18.237 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:18.237 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:18.237 12:54:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:18.237 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:18.237 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:18.237 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:18.237 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:18.237 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:18.237 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:18.237 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:18.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:18.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:24:18.498 00:24:18.498 --- 10.0.0.2 ping statistics --- 00:24:18.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.498 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:18.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:18.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:24:18.498 00:24:18.498 --- 10.0.0.1 ping statistics --- 00:24:18.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.498 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 
00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=4026923 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 4026923 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 4026923 ']' 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:18.498 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:18.498 [2024-07-15 12:54:10.287369] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:24:18.498 [2024-07-15 12:54:10.287425] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.498 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.498 [2024-07-15 12:54:10.362581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.758 [2024-07-15 12:54:10.451872] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:18.758 [2024-07-15 12:54:10.451912] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.758 [2024-07-15 12:54:10.451922] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.758 [2024-07-15 12:54:10.451930] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:18.758 [2024-07-15 12:54:10.451937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:18.758 [2024-07-15 12:54:10.451959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:18.758 [2024-07-15 12:54:10.595413] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:18.758 null0 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g be40ce4fa4974952b87dd5ec0f480ea3 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:18.758 [2024-07-15 12:54:10.635620] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.758 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:19.016 nvme0n1 00:24:19.017 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.017 12:54:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:19.017 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.017 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:19.017 [ 00:24:19.017 { 00:24:19.017 "name": "nvme0n1", 00:24:19.017 "aliases": [ 00:24:19.017 "be40ce4f-a497-4952-b87d-d5ec0f480ea3" 00:24:19.017 ], 00:24:19.017 "product_name": "NVMe disk", 00:24:19.017 "block_size": 512, 00:24:19.017 "num_blocks": 2097152, 00:24:19.017 "uuid": "be40ce4f-a497-4952-b87d-d5ec0f480ea3", 00:24:19.017 "assigned_rate_limits": { 00:24:19.017 "rw_ios_per_sec": 0, 00:24:19.017 "rw_mbytes_per_sec": 0, 00:24:19.017 "r_mbytes_per_sec": 0, 00:24:19.017 "w_mbytes_per_sec": 0 00:24:19.017 }, 00:24:19.017 "claimed": false, 00:24:19.017 "zoned": false, 00:24:19.017 "supported_io_types": { 00:24:19.017 "read": true, 00:24:19.017 "write": true, 00:24:19.017 "unmap": false, 00:24:19.017 "flush": true, 00:24:19.017 "reset": true, 00:24:19.017 "nvme_admin": true, 00:24:19.017 "nvme_io": true, 00:24:19.017 "nvme_io_md": false, 00:24:19.017 "write_zeroes": true, 00:24:19.017 "zcopy": false, 00:24:19.017 "get_zone_info": false, 00:24:19.017 "zone_management": false, 00:24:19.017 "zone_append": false, 00:24:19.017 "compare": 
true, 00:24:19.017 "compare_and_write": true, 00:24:19.017 "abort": true, 00:24:19.017 "seek_hole": false, 00:24:19.017 "seek_data": false, 00:24:19.017 "copy": true, 00:24:19.017 "nvme_iov_md": false 00:24:19.017 }, 00:24:19.017 "memory_domains": [ 00:24:19.017 { 00:24:19.017 "dma_device_id": "system", 00:24:19.017 "dma_device_type": 1 00:24:19.017 } 00:24:19.017 ], 00:24:19.017 "driver_specific": { 00:24:19.017 "nvme": [ 00:24:19.017 { 00:24:19.017 "trid": { 00:24:19.017 "trtype": "TCP", 00:24:19.017 "adrfam": "IPv4", 00:24:19.017 "traddr": "10.0.0.2", 00:24:19.017 "trsvcid": "4420", 00:24:19.017 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:19.017 }, 00:24:19.017 "ctrlr_data": { 00:24:19.017 "cntlid": 1, 00:24:19.017 "vendor_id": "0x8086", 00:24:19.017 "model_number": "SPDK bdev Controller", 00:24:19.017 "serial_number": "00000000000000000000", 00:24:19.017 "firmware_revision": "24.09", 00:24:19.017 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:19.017 "oacs": { 00:24:19.017 "security": 0, 00:24:19.017 "format": 0, 00:24:19.017 "firmware": 0, 00:24:19.017 "ns_manage": 0 00:24:19.017 }, 00:24:19.017 "multi_ctrlr": true, 00:24:19.017 "ana_reporting": false 00:24:19.017 }, 00:24:19.017 "vs": { 00:24:19.017 "nvme_version": "1.3" 00:24:19.017 }, 00:24:19.017 "ns_data": { 00:24:19.017 "id": 1, 00:24:19.017 "can_share": true 00:24:19.017 } 00:24:19.017 } 00:24:19.017 ], 00:24:19.017 "mp_policy": "active_passive" 00:24:19.017 } 00:24:19.017 } 00:24:19.017 ] 00:24:19.017 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.017 12:54:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:19.017 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.017 12:54:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:19.017 [2024-07-15 12:54:10.892678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:19.017 [2024-07-15 12:54:10.892748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f09b50 (9): Bad file descriptor 00:24:19.276 [2024-07-15 12:54:11.024374] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:19.276 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.276 12:54:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:19.276 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.276 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:19.276 [ 00:24:19.276 { 00:24:19.276 "name": "nvme0n1", 00:24:19.276 "aliases": [ 00:24:19.276 "be40ce4f-a497-4952-b87d-d5ec0f480ea3" 00:24:19.276 ], 00:24:19.276 "product_name": "NVMe disk", 00:24:19.276 "block_size": 512, 00:24:19.276 "num_blocks": 2097152, 00:24:19.276 "uuid": "be40ce4f-a497-4952-b87d-d5ec0f480ea3", 00:24:19.276 "assigned_rate_limits": { 00:24:19.276 "rw_ios_per_sec": 0, 00:24:19.276 "rw_mbytes_per_sec": 0, 00:24:19.276 "r_mbytes_per_sec": 0, 00:24:19.276 "w_mbytes_per_sec": 0 00:24:19.276 }, 00:24:19.276 "claimed": false, 00:24:19.276 "zoned": false, 00:24:19.276 "supported_io_types": { 00:24:19.276 "read": true, 00:24:19.276 "write": true, 00:24:19.276 "unmap": false, 00:24:19.276 "flush": true, 00:24:19.276 "reset": true, 00:24:19.276 "nvme_admin": true, 00:24:19.276 "nvme_io": true, 00:24:19.276 "nvme_io_md": false, 00:24:19.276 "write_zeroes": true, 00:24:19.276 "zcopy": false, 00:24:19.276 "get_zone_info": false, 00:24:19.276 "zone_management": false, 00:24:19.276 "zone_append": false, 00:24:19.276 "compare": true, 00:24:19.276 "compare_and_write": true, 00:24:19.276 "abort": true, 00:24:19.276 "seek_hole": false, 00:24:19.276 "seek_data": false, 00:24:19.276 "copy": true, 00:24:19.276 "nvme_iov_md": 
false 00:24:19.276 }, 00:24:19.276 "memory_domains": [ 00:24:19.276 { 00:24:19.276 "dma_device_id": "system", 00:24:19.276 "dma_device_type": 1 00:24:19.276 } 00:24:19.276 ], 00:24:19.276 "driver_specific": { 00:24:19.276 "nvme": [ 00:24:19.276 { 00:24:19.276 "trid": { 00:24:19.276 "trtype": "TCP", 00:24:19.276 "adrfam": "IPv4", 00:24:19.276 "traddr": "10.0.0.2", 00:24:19.277 "trsvcid": "4420", 00:24:19.277 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:19.277 }, 00:24:19.277 "ctrlr_data": { 00:24:19.277 "cntlid": 2, 00:24:19.277 "vendor_id": "0x8086", 00:24:19.277 "model_number": "SPDK bdev Controller", 00:24:19.277 "serial_number": "00000000000000000000", 00:24:19.277 "firmware_revision": "24.09", 00:24:19.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:19.277 "oacs": { 00:24:19.277 "security": 0, 00:24:19.277 "format": 0, 00:24:19.277 "firmware": 0, 00:24:19.277 "ns_manage": 0 00:24:19.277 }, 00:24:19.277 "multi_ctrlr": true, 00:24:19.277 "ana_reporting": false 00:24:19.277 }, 00:24:19.277 "vs": { 00:24:19.277 "nvme_version": "1.3" 00:24:19.277 }, 00:24:19.277 "ns_data": { 00:24:19.277 "id": 1, 00:24:19.277 "can_share": true 00:24:19.277 } 00:24:19.277 } 00:24:19.277 ], 00:24:19.277 "mp_policy": "active_passive" 00:24:19.277 } 00:24:19.277 } 00:24:19.277 ] 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.h4nVFI62xH 00:24:19.277 12:54:11 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.h4nVFI62xH 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:19.277 [2024-07-15 12:54:11.085343] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:19.277 [2024-07-15 12:54:11.085468] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.h4nVFI62xH 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:19.277 [2024-07-15 12:54:11.093353] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.h4nVFI62xH 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:19.277 [2024-07-15 12:54:11.105415] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:19.277 [2024-07-15 12:54:11.105458] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:19.277 nvme0n1 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.277 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:19.277 [ 00:24:19.277 { 00:24:19.277 "name": "nvme0n1", 00:24:19.277 "aliases": [ 00:24:19.277 "be40ce4f-a497-4952-b87d-d5ec0f480ea3" 00:24:19.277 ], 00:24:19.277 "product_name": "NVMe disk", 00:24:19.277 "block_size": 512, 00:24:19.277 "num_blocks": 2097152, 00:24:19.277 "uuid": "be40ce4f-a497-4952-b87d-d5ec0f480ea3", 00:24:19.277 "assigned_rate_limits": { 00:24:19.277 "rw_ios_per_sec": 0, 00:24:19.277 "rw_mbytes_per_sec": 0, 00:24:19.277 "r_mbytes_per_sec": 0, 00:24:19.277 "w_mbytes_per_sec": 0 00:24:19.277 }, 00:24:19.277 "claimed": false, 00:24:19.277 "zoned": false, 00:24:19.277 "supported_io_types": { 00:24:19.277 "read": true, 00:24:19.277 "write": true, 00:24:19.277 "unmap": false, 00:24:19.277 "flush": true, 00:24:19.277 "reset": true, 
00:24:19.277 "nvme_admin": true, 00:24:19.277 "nvme_io": true, 00:24:19.277 "nvme_io_md": false, 00:24:19.277 "write_zeroes": true, 00:24:19.277 "zcopy": false, 00:24:19.277 "get_zone_info": false, 00:24:19.277 "zone_management": false, 00:24:19.277 "zone_append": false, 00:24:19.277 "compare": true, 00:24:19.277 "compare_and_write": true, 00:24:19.277 "abort": true, 00:24:19.277 "seek_hole": false, 00:24:19.277 "seek_data": false, 00:24:19.277 "copy": true, 00:24:19.277 "nvme_iov_md": false 00:24:19.277 }, 00:24:19.277 "memory_domains": [ 00:24:19.277 { 00:24:19.277 "dma_device_id": "system", 00:24:19.277 "dma_device_type": 1 00:24:19.277 } 00:24:19.277 ], 00:24:19.277 "driver_specific": { 00:24:19.277 "nvme": [ 00:24:19.277 { 00:24:19.277 "trid": { 00:24:19.277 "trtype": "TCP", 00:24:19.277 "adrfam": "IPv4", 00:24:19.277 "traddr": "10.0.0.2", 00:24:19.277 "trsvcid": "4421", 00:24:19.277 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:19.277 }, 00:24:19.277 "ctrlr_data": { 00:24:19.277 "cntlid": 3, 00:24:19.277 "vendor_id": "0x8086", 00:24:19.277 "model_number": "SPDK bdev Controller", 00:24:19.277 "serial_number": "00000000000000000000", 00:24:19.277 "firmware_revision": "24.09", 00:24:19.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:19.277 "oacs": { 00:24:19.277 "security": 0, 00:24:19.277 "format": 0, 00:24:19.277 "firmware": 0, 00:24:19.277 "ns_manage": 0 00:24:19.277 }, 00:24:19.277 "multi_ctrlr": true, 00:24:19.278 "ana_reporting": false 00:24:19.278 }, 00:24:19.278 "vs": { 00:24:19.278 "nvme_version": "1.3" 00:24:19.278 }, 00:24:19.278 "ns_data": { 00:24:19.278 "id": 1, 00:24:19.278 "can_share": true 00:24:19.278 } 00:24:19.278 } 00:24:19.278 ], 00:24:19.278 "mp_policy": "active_passive" 00:24:19.278 } 00:24:19.278 } 00:24:19.278 ] 00:24:19.278 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.278 12:54:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:24:19.278 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.278 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:19.278 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.278 12:54:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.h4nVFI62xH 00:24:19.278 12:54:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:19.278 12:54:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:19.278 12:54:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:19.278 12:54:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:19.278 12:54:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:19.278 12:54:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:19.537 12:54:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:19.537 12:54:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:19.537 rmmod nvme_tcp 00:24:19.537 rmmod nvme_fabrics 00:24:19.537 rmmod nvme_keyring 00:24:19.537 12:54:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:19.537 12:54:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:19.537 12:54:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:19.537 12:54:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 4026923 ']' 00:24:19.537 12:54:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 4026923 00:24:19.537 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 4026923 ']' 00:24:19.537 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 4026923 00:24:19.537 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:24:19.537 12:54:11 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:19.537 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4026923 00:24:19.537 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:19.537 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:19.537 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4026923' 00:24:19.537 killing process with pid 4026923 00:24:19.537 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 4026923 00:24:19.537 [2024-07-15 12:54:11.324681] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:19.537 [2024-07-15 12:54:11.324711] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:19.537 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 4026923 00:24:19.795 12:54:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:19.795 12:54:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:19.795 12:54:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:19.795 12:54:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:19.795 12:54:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:19.795 12:54:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.795 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:19.795 12:54:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.699 12:54:13 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:21.699 00:24:21.699 real 0m9.483s 00:24:21.699 user 0m2.990s 00:24:21.699 sys 0m4.929s 00:24:21.699 12:54:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:21.699 12:54:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:21.699 ************************************ 00:24:21.699 END TEST nvmf_async_init 00:24:21.699 ************************************ 00:24:21.699 12:54:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:21.699 12:54:13 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:21.699 12:54:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:21.699 12:54:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:21.699 12:54:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:21.958 ************************************ 00:24:21.958 START TEST dma 00:24:21.958 ************************************ 00:24:21.958 12:54:13 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:21.958 * Looking for test storage... 
00:24:21.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:21.958 12:54:13 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:21.958 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:24:21.958 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.958 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.958 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.958 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.958 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.958 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.958 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.958 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.958 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.958 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.958 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:21.958 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:21.958 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.958 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.958 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:21.958 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:21.958 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:21.958 12:54:13 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:24:21.958 12:54:13 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.958 12:54:13 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.958 12:54:13 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.959 12:54:13 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.959 12:54:13 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:24:21.959 12:54:13 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:24:21.959 12:54:13 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.959 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:24:21.959 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:21.959 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:21.959 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:21.959 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.959 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.959 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:21.959 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:21.959 12:54:13 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:21.959 12:54:13 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:21.959 12:54:13 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:24:21.959 00:24:21.959 real 0m0.122s 00:24:21.959 user 0m0.056s 00:24:21.959 sys 0m0.075s 00:24:21.959 12:54:13 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:21.959 12:54:13 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:24:21.959 ************************************ 00:24:21.959 END TEST dma 00:24:21.959 ************************************ 00:24:21.959 12:54:13 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:24:21.959 12:54:13 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:21.959 12:54:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:21.959 12:54:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:21.959 12:54:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:21.959 ************************************ 00:24:21.959 START TEST nvmf_identify 00:24:21.959 ************************************ 00:24:21.959 12:54:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:22.218 * Looking for test storage... 00:24:22.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 
00:24:22.218 12:54:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:28.790 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:24:28.790 Found 0000:af:00.1 (0x8086 - 0x159b)
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]]
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:24:28.790 Found net devices under 0000:af:00.0: cvl_0_0
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]]
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:24:28.790 Found net devices under 0000:af:00.1: cvl_0_1
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:28.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
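The `Found net devices under <pci>: <iface>` echo lines emitted by nvmf/common.sh above are a convenient handle when post-processing this log, e.g. to recover the PCI-address-to-interface mapping the test fixture built. A minimal sketch (the helper name and parsing approach are illustrative, not part of the test scripts):

```python
import re

# Matches the interface-discovery lines printed above, e.g.
# "Found net devices under 0000:af:00.0: cvl_0_0"
FOUND_RE = re.compile(r"Found net devices under (\S+): (\S+)")

def net_devs_from_log(lines):
    """Collect a {pci_address: interface_name} map from log lines."""
    devs = {}
    for line in lines:
        m = FOUND_RE.search(line)
        if m:
            devs[m.group(1)] = m.group(2)
    return devs

sample = [
    "Found net devices under 0000:af:00.0: cvl_0_0",
    "Found net devices under 0000:af:00.1: cvl_0_1",
]
print(net_devs_from_log(sample))
# → {'0000:af:00.0': 'cvl_0_0', '0000:af:00.1': 'cvl_0_1'}
```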
00:24:28.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms
00:24:28.790
00:24:28.790 --- 10.0.0.2 ping statistics ---
00:24:28.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:28.790 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:28.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:28.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms
00:24:28.790
00:24:28.790 --- 10.0.0.1 ping statistics ---
00:24:28.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:28.790 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=4030723
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 4030723
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 4030723 ']'
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:28.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:28.790 12:54:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:28.790 [2024-07-15 12:54:19.880371] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization...
00:24:28.790 [2024-07-15 12:54:19.880427] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:28.790 EAL: No free 2048 kB hugepages reported on node 1
00:24:28.790 [2024-07-15 12:54:19.966224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:28.790 [2024-07-15 12:54:20.064369] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
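The two `ping -c 1` checks above verify the target/initiator path before the NVMe-oF traffic starts, and their `rtt min/avg/max/mdev` summary lines carry the only latency numbers in this part of the log. A small sketch for pulling those values out of a captured log (the function name is made up for illustration; the sample line is taken verbatim from the output above):

```python
import re

# Matches ping's summary line, e.g.
# "rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms"
RTT_RE = re.compile(r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms")

def parse_rtt(line):
    """Return {'min','avg','max','mdev'} in ms, or None if no match."""
    m = RTT_RE.search(line)
    if not m:
        return None
    return dict(zip(("min", "avg", "max", "mdev"), map(float, m.groups())))

print(parse_rtt("rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms"))
# → {'min': 0.268, 'avg': 0.268, 'max': 0.268, 'mdev': 0.0}
```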
00:24:28.791 [2024-07-15 12:54:20.064408] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:28.791 [2024-07-15 12:54:20.064419] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:28.791 [2024-07-15 12:54:20.064427] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:28.791 [2024-07-15 12:54:20.064435] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:28.791 [2024-07-15 12:54:20.064485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:28.791 [2024-07-15 12:54:20.064596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:28.791 [2024-07-15 12:54:20.064707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:28.791 [2024-07-15 12:54:20.064707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:29.050 [2024-07-15 12:54:20.845502] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:29.050 Malloc0
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:29.050 [2024-07-15 12:54:20.945686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:29.050 [
00:24:29.050 {
00:24:29.050 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:24:29.050 "subtype": "Discovery",
00:24:29.050 "listen_addresses": [
00:24:29.050 {
00:24:29.050 "trtype": "TCP",
00:24:29.050 "adrfam": "IPv4",
00:24:29.050 "traddr": "10.0.0.2",
00:24:29.050 "trsvcid": "4420"
00:24:29.050 }
00:24:29.050 ],
00:24:29.050 "allow_any_host": true,
00:24:29.050 "hosts": []
00:24:29.050 },
00:24:29.050 {
00:24:29.050 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:24:29.050 "subtype": "NVMe",
00:24:29.050 "listen_addresses": [
00:24:29.050 {
00:24:29.050 "trtype": "TCP",
00:24:29.050 "adrfam": "IPv4",
00:24:29.050 "traddr": "10.0.0.2",
00:24:29.050 "trsvcid": "4420"
00:24:29.050 }
00:24:29.050 ],
00:24:29.050 "allow_any_host": true,
00:24:29.050 "hosts": [],
00:24:29.050 "serial_number": "SPDK00000000000001",
00:24:29.050 "model_number": "SPDK bdev Controller",
00:24:29.050 "max_namespaces": 32,
00:24:29.050 "min_cntlid": 1,
00:24:29.050 "max_cntlid": 65519,
00:24:29.050 "namespaces": [
00:24:29.050 {
00:24:29.050 "nsid": 1,
00:24:29.050 "bdev_name": "Malloc0",
00:24:29.050 "name": "Malloc0",
00:24:29.050 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:24:29.050 "eui64": "ABCDEF0123456789",
00:24:29.050 "uuid": "227762df-c1c3-4c50-a439-8259f98ca0dc"
00:24:29.050 }
00:24:29.050 ]
00:24:29.050 }
00:24:29.050 ]
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:29.050 12:54:20 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:24:29.311 [2024-07-15 12:54:20.998176] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization...
00:24:29.311 [2024-07-15 12:54:20.998211] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031000 ]
00:24:29.311 EAL: No free 2048 kB hugepages reported on node 1
00:24:29.311 [2024-07-15 12:54:21.035795] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout)
00:24:29.311 [2024-07-15 12:54:21.035858] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:24:29.311 [2024-07-15 12:54:21.035865] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:24:29.311 [2024-07-15 12:54:21.035879] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:24:29.311 [2024-07-15 12:54:21.035887] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:24:29.311 [2024-07-15 12:54:21.036214] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout)
00:24:29.311 [2024-07-15 12:54:21.036249] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1da3ec0 0
00:24:29.311 [2024-07-15 12:54:21.042267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:24:29.311 [2024-07-15 12:54:21.042282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:24:29.311 [2024-07-15 12:54:21.042288] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:24:29.311 [2024-07-15 12:54:21.042292] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:24:29.311 [2024-07-15 12:54:21.042337] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:29.311 [2024-07-15 12:54:21.042344] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:29.311 [2024-07-15 12:54:21.042349] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1da3ec0)
00:24:29.311 [2024-07-15 12:54:21.042365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:24:29.311 [2024-07-15 12:54:21.042385] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e26e40, cid 0, qid 0
00:24:29.311 [2024-07-15 12:54:21.050270] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:29.311 [2024-07-15 12:54:21.050283] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:29.311 [2024-07-15 12:54:21.050287] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:29.311 [2024-07-15 12:54:21.050293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e26e40) on tqpair=0x1da3ec0
00:24:29.311 [2024-07-15 12:54:21.050305] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:24:29.311 [2024-07-15 12:54:21.050313] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout)
00:24:29.311 [2024-07-15 12:54:21.050320] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout)
00:24:29.311 [2024-07-15 12:54:21.050336] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:29.311 [2024-07-15 12:54:21.050342] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:29.311 [2024-07-15 12:54:21.050347] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1da3ec0)
00:24:29.311 [2024-07-15 12:54:21.050357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.311 [2024-07-15 12:54:21.050374] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e26e40, cid 0, qid 0
00:24:29.311 [2024-07-15 12:54:21.050558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:29.311 [2024-07-15 12:54:21.050567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:29.311 [2024-07-15 12:54:21.050572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:29.311 [2024-07-15 12:54:21.050577] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e26e40) on tqpair=0x1da3ec0
00:24:29.311 [2024-07-15 12:54:21.050583] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout)
00:24:29.311 [2024-07-15 12:54:21.050592] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout)
00:24:29.311 [2024-07-15 12:54:21.050606] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:29.311 [2024-07-15 12:54:21.050611] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:29.311 [2024-07-15 12:54:21.050616] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1da3ec0)
00:24:29.311 [2024-07-15 12:54:21.050624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.311 [2024-07-15 12:54:21.050638] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e26e40, cid 0, qid 0
00:24:29.311 [2024-07-15 12:54:21.050746] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:29.311 [2024-07-15 12:54:21.050755] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:29.311 [2024-07-15 12:54:21.050759] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:29.311 [2024-07-15 12:54:21.050764] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e26e40) on tqpair=0x1da3ec0
00:24:29.311 [2024-07-15 12:54:21.050770] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout)
00:24:29.311 [2024-07-15 12:54:21.050780] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms)
00:24:29.311 [2024-07-15 12:54:21.050788] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:29.311 [2024-07-15 12:54:21.050793] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:29.311 [2024-07-15 12:54:21.050798] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1da3ec0)
00:24:29.311 [2024-07-15 12:54:21.050806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.311 [2024-07-15 12:54:21.050819] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e26e40, cid 0, qid 0
00:24:29.311 [2024-07-15 12:54:21.050946] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:29.311 [2024-07-15 12:54:21.050954] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:29.311 [2024-07-15 12:54:21.050958] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:29.311 [2024-07-15 12:54:21.050963] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e26e40) on tqpair=0x1da3ec0
00:24:29.311 [2024-07-15 12:54:21.050969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:24:29.311 [2024-07-15 12:54:21.050981] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.050987] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.050991] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1da3ec0)
00:24:29.312 [2024-07-15 12:54:21.050999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.312 [2024-07-15 12:54:21.051013] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e26e40, cid 0, qid 0
00:24:29.312 [2024-07-15 12:54:21.051124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:29.312 [2024-07-15 12:54:21.051133] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:29.312 [2024-07-15 12:54:21.051137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.051142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e26e40) on tqpair=0x1da3ec0
00:24:29.312 [2024-07-15 12:54:21.051148] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0
00:24:29.312 [2024-07-15 12:54:21.051154] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms)
00:24:29.312 [2024-07-15 12:54:21.051164] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:24:29.312 [2024-07-15 12:54:21.051273] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1
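The `nvmf_get_subsystems` output earlier in this log is plain JSON once each line's elapsed-time prefix is stripped, which makes it easy to post-process, e.g. to recover which NQNs listen where. A sketch using an abridged copy of the JSON shown above (field names follow the log output; the `listeners` variable is made up for illustration):

```python
import json

# Abridged from the nvmf_get_subsystems output in this log.
subsystems = json.loads("""
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery",
   "listen_addresses": [{"trtype": "TCP", "adrfam": "IPv4",
                         "traddr": "10.0.0.2", "trsvcid": "4420"}]},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe",
   "listen_addresses": [{"trtype": "TCP", "adrfam": "IPv4",
                         "traddr": "10.0.0.2", "trsvcid": "4420"}]}
]
""")

# Map each subsystem NQN to its "addr:port" listener endpoints.
listeners = {
    s["nqn"]: [f'{a["traddr"]}:{a["trsvcid"]}' for a in s["listen_addresses"]]
    for s in subsystems
}
print(listeners)
# → {'nqn.2014-08.org.nvmexpress.discovery': ['10.0.0.2:4420'],
#    'nqn.2016-06.io.spdk:cnode1': ['10.0.0.2:4420']}
```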
00:24:29.312 [2024-07-15 12:54:21.051281] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:24:29.312 [2024-07-15 12:54:21.051291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.051296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.051300] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1da3ec0)
00:24:29.312 [2024-07-15 12:54:21.051309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.312 [2024-07-15 12:54:21.051324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e26e40, cid 0, qid 0
00:24:29.312 [2024-07-15 12:54:21.051435] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:29.312 [2024-07-15 12:54:21.051445] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:29.312 [2024-07-15 12:54:21.051449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.051454] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e26e40) on tqpair=0x1da3ec0
00:24:29.312 [2024-07-15 12:54:21.051461] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:24:29.312 [2024-07-15 12:54:21.051474] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.051479] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.051484] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1da3ec0)
00:24:29.312 [2024-07-15 12:54:21.051492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.312 [2024-07-15 12:54:21.051505] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e26e40, cid 0, qid 0
00:24:29.312 [2024-07-15 12:54:21.051612] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:29.312 [2024-07-15 12:54:21.051621] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:29.312 [2024-07-15 12:54:21.051626] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.051631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e26e40) on tqpair=0x1da3ec0
00:24:29.312 [2024-07-15 12:54:21.051639] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:24:29.312 [2024-07-15 12:54:21.051646] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms)
00:24:29.312 [2024-07-15 12:54:21.051656] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout)
00:24:29.312 [2024-07-15 12:54:21.051667] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms)
00:24:29.312 [2024-07-15 12:54:21.051680] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.051684] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1da3ec0)
00:24:29.312 [2024-07-15 12:54:21.051693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.312 [2024-07-15 12:54:21.051708] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e26e40, cid 0, qid 0
00:24:29.312 [2024-07-15 12:54:21.051883] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:29.312 [2024-07-15 12:54:21.051891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:29.312 [2024-07-15 12:54:21.051896] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.051901] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1da3ec0): datao=0, datal=4096, cccid=0
00:24:29.312 [2024-07-15 12:54:21.051910] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e26e40) on tqpair(0x1da3ec0): expected_datao=0, payload_size=4096
00:24:29.312 [2024-07-15 12:54:21.051915] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.051925] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.051931] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.051955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:29.312 [2024-07-15 12:54:21.051963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:29.312 [2024-07-15 12:54:21.051967] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.051972] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e26e40) on tqpair=0x1da3ec0
00:24:29.312 [2024-07-15 12:54:21.051981] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295
00:24:29.312 [2024-07-15 12:54:21.051991] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072
00:24:29.312 [2024-07-15 12:54:21.051997] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001
00:24:29.312 [2024-07-15 12:54:21.052004] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16
00:24:29.312 [2024-07-15 12:54:21.052010] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1
00:24:29.312 [2024-07-15 12:54:21.052017] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms)
00:24:29.312 [2024-07-15 12:54:21.052028] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms)
00:24:29.312 [2024-07-15 12:54:21.052037] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.052042] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.052047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1da3ec0)
00:24:29.312 [2024-07-15 12:54:21.052055] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:24:29.312 [2024-07-15 12:54:21.052070] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e26e40, cid 0, qid 0
00:24:29.312 [2024-07-15 12:54:21.052179] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:29.312 [2024-07-15 12:54:21.052187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:29.312 [2024-07-15 12:54:21.052192] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.052197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e26e40) on tqpair=0x1da3ec0
00:24:29.312 [2024-07-15 12:54:21.052206] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.052211] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.052216] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1da3ec0)
00:24:29.312 [2024-07-15 12:54:21.052224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:29.312 [2024-07-15 12:54:21.052231] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.052236] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.052241] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1da3ec0)
00:24:29.312 [2024-07-15 12:54:21.052249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:29.312 [2024-07-15 12:54:21.052263] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.052268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.052275] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1da3ec0)
00:24:29.312 [2024-07-15 12:54:21.052283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:29.312 [2024-07-15 12:54:21.052290] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.052296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.052300] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1da3ec0)
00:24:29.312 [2024-07-15 12:54:21.052307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:29.312 [2024-07-15 12:54:21.052314] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms)
00:24:29.312 [2024-07-15 12:54:21.052328] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:24:29.312 [2024-07-15 12:54:21.052336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.052341] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1da3ec0)
00:24:29.312 [2024-07-15 12:54:21.052349] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.312 [2024-07-15 12:54:21.052365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e26e40, cid 0, qid 0
00:24:29.312 [2024-07-15 12:54:21.052372] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e26fc0, cid 1, qid 0
00:24:29.312 [2024-07-15 12:54:21.052378] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e27140, cid 2, qid 0
00:24:29.312 [2024-07-15 12:54:21.052384] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e272c0, cid 3, qid 0
00:24:29.312 [2024-07-15 12:54:21.052390] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e27440, cid 4, qid 0
00:24:29.312 [2024-07-15 12:54:21.052562] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:29.312 [2024-07-15 12:54:21.052571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:29.312 [2024-07-15 12:54:21.052576] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.052580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e27440) on tqpair=0x1da3ec0
00:24:29.312 [2024-07-15 12:54:21.052587] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us
00:24:29.312 [2024-07-15 12:54:21.052593] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout)
00:24:29.312 [2024-07-15 12:54:21.052606] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:29.312 [2024-07-15 12:54:21.052612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1da3ec0)
00:24:29.312 [2024-07-15 12:54:21.052620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.312 [2024-07-15 12:54:21.052634] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e27440, cid 4, qid 0
00:24:29.313 [2024-07-15 12:54:21.052770] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:29.313 [2024-07-15 12:54:21.052779] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:29.313 [2024-07-15 12:54:21.052784] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:29.313 [2024-07-15 12:54:21.052788] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1da3ec0): datao=0, datal=4096, cccid=4
00:24:29.313 [2024-07-15 12:54:21.052794] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e27440) on tqpair(0x1da3ec0): expected_datao=0, payload_size=4096
00:24:29.313 [2024-07-15 12:54:21.052799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:29.313 [2024-07-15 12:54:21.052811] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:29.313 [2024-07-15 12:54:21.052816] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:29.313 [2024-07-15 12:54:21.052843] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:29.313 [2024-07-15 12:54:21.052850] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:29.313 [2024-07-15 12:54:21.052855] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle:
*DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.052860] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e27440) on tqpair=0x1da3ec0 00:24:29.313 [2024-07-15 12:54:21.052875] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:29.313 [2024-07-15 12:54:21.052902] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.052908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1da3ec0) 00:24:29.313 [2024-07-15 12:54:21.052917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.313 [2024-07-15 12:54:21.052925] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.052930] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.052934] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1da3ec0) 00:24:29.313 [2024-07-15 12:54:21.052942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.313 [2024-07-15 12:54:21.052960] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e27440, cid 4, qid 0 00:24:29.313 [2024-07-15 12:54:21.052968] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e275c0, cid 5, qid 0 00:24:29.313 [2024-07-15 12:54:21.053131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:29.313 [2024-07-15 12:54:21.053140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:29.313 [2024-07-15 12:54:21.053145] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.053149] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x1da3ec0): datao=0, datal=1024, cccid=4 00:24:29.313 [2024-07-15 12:54:21.053155] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e27440) on tqpair(0x1da3ec0): expected_datao=0, payload_size=1024 00:24:29.313 [2024-07-15 12:54:21.053161] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.053169] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.053174] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.053181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.313 [2024-07-15 12:54:21.053188] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.313 [2024-07-15 12:54:21.053193] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.053198] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e275c0) on tqpair=0x1da3ec0 00:24:29.313 [2024-07-15 12:54:21.098267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.313 [2024-07-15 12:54:21.098282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.313 [2024-07-15 12:54:21.098287] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.098292] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e27440) on tqpair=0x1da3ec0 00:24:29.313 [2024-07-15 12:54:21.098311] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.098317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1da3ec0) 00:24:29.313 [2024-07-15 12:54:21.098327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.313 [2024-07-15 12:54:21.098349] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1e27440, cid 4, qid 0 00:24:29.313 [2024-07-15 12:54:21.098645] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:29.313 [2024-07-15 12:54:21.098654] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:29.313 [2024-07-15 12:54:21.098658] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.098663] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1da3ec0): datao=0, datal=3072, cccid=4 00:24:29.313 [2024-07-15 12:54:21.098669] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e27440) on tqpair(0x1da3ec0): expected_datao=0, payload_size=3072 00:24:29.313 [2024-07-15 12:54:21.098674] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.098691] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.098696] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.143267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.313 [2024-07-15 12:54:21.143280] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.313 [2024-07-15 12:54:21.143285] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.143290] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e27440) on tqpair=0x1da3ec0 00:24:29.313 [2024-07-15 12:54:21.143301] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.143307] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1da3ec0) 00:24:29.313 [2024-07-15 12:54:21.143316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.313 [2024-07-15 12:54:21.143337] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e27440, cid 4, qid 0 00:24:29.313 [2024-07-15 12:54:21.143522] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:29.313 [2024-07-15 12:54:21.143530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:29.313 [2024-07-15 12:54:21.143535] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.143539] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1da3ec0): datao=0, datal=8, cccid=4 00:24:29.313 [2024-07-15 12:54:21.143545] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e27440) on tqpair(0x1da3ec0): expected_datao=0, payload_size=8 00:24:29.313 [2024-07-15 12:54:21.143551] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.143559] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.143564] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.185420] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.313 [2024-07-15 12:54:21.185434] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.313 [2024-07-15 12:54:21.185438] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.313 [2024-07-15 12:54:21.185443] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e27440) on tqpair=0x1da3ec0 00:24:29.313 ===================================================== 00:24:29.313 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:29.313 ===================================================== 00:24:29.313 Controller Capabilities/Features 00:24:29.313 ================================ 00:24:29.313 Vendor ID: 0000 00:24:29.313 Subsystem Vendor ID: 0000 00:24:29.313 Serial Number: .................... 
00:24:29.313 Model Number: ........................................ 00:24:29.313 Firmware Version: 24.09 00:24:29.313 Recommended Arb Burst: 0 00:24:29.313 IEEE OUI Identifier: 00 00 00 00:24:29.313 Multi-path I/O 00:24:29.313 May have multiple subsystem ports: No 00:24:29.313 May have multiple controllers: No 00:24:29.313 Associated with SR-IOV VF: No 00:24:29.313 Max Data Transfer Size: 131072 00:24:29.313 Max Number of Namespaces: 0 00:24:29.313 Max Number of I/O Queues: 1024 00:24:29.313 NVMe Specification Version (VS): 1.3 00:24:29.313 NVMe Specification Version (Identify): 1.3 00:24:29.313 Maximum Queue Entries: 128 00:24:29.313 Contiguous Queues Required: Yes 00:24:29.313 Arbitration Mechanisms Supported 00:24:29.313 Weighted Round Robin: Not Supported 00:24:29.313 Vendor Specific: Not Supported 00:24:29.313 Reset Timeout: 15000 ms 00:24:29.313 Doorbell Stride: 4 bytes 00:24:29.313 NVM Subsystem Reset: Not Supported 00:24:29.313 Command Sets Supported 00:24:29.313 NVM Command Set: Supported 00:24:29.313 Boot Partition: Not Supported 00:24:29.313 Memory Page Size Minimum: 4096 bytes 00:24:29.313 Memory Page Size Maximum: 4096 bytes 00:24:29.313 Persistent Memory Region: Not Supported 00:24:29.313 Optional Asynchronous Events Supported 00:24:29.313 Namespace Attribute Notices: Not Supported 00:24:29.313 Firmware Activation Notices: Not Supported 00:24:29.313 ANA Change Notices: Not Supported 00:24:29.313 PLE Aggregate Log Change Notices: Not Supported 00:24:29.313 LBA Status Info Alert Notices: Not Supported 00:24:29.313 EGE Aggregate Log Change Notices: Not Supported 00:24:29.313 Normal NVM Subsystem Shutdown event: Not Supported 00:24:29.313 Zone Descriptor Change Notices: Not Supported 00:24:29.313 Discovery Log Change Notices: Supported 00:24:29.313 Controller Attributes 00:24:29.313 128-bit Host Identifier: Not Supported 00:24:29.313 Non-Operational Permissive Mode: Not Supported 00:24:29.313 NVM Sets: Not Supported 00:24:29.313 Read Recovery Levels: Not 
Supported 00:24:29.313 Endurance Groups: Not Supported 00:24:29.313 Predictable Latency Mode: Not Supported 00:24:29.313 Traffic Based Keep ALive: Not Supported 00:24:29.313 Namespace Granularity: Not Supported 00:24:29.314 SQ Associations: Not Supported 00:24:29.314 UUID List: Not Supported 00:24:29.314 Multi-Domain Subsystem: Not Supported 00:24:29.314 Fixed Capacity Management: Not Supported 00:24:29.314 Variable Capacity Management: Not Supported 00:24:29.314 Delete Endurance Group: Not Supported 00:24:29.314 Delete NVM Set: Not Supported 00:24:29.314 Extended LBA Formats Supported: Not Supported 00:24:29.314 Flexible Data Placement Supported: Not Supported 00:24:29.314 00:24:29.314 Controller Memory Buffer Support 00:24:29.314 ================================ 00:24:29.314 Supported: No 00:24:29.314 00:24:29.314 Persistent Memory Region Support 00:24:29.314 ================================ 00:24:29.314 Supported: No 00:24:29.314 00:24:29.314 Admin Command Set Attributes 00:24:29.314 ============================ 00:24:29.314 Security Send/Receive: Not Supported 00:24:29.314 Format NVM: Not Supported 00:24:29.314 Firmware Activate/Download: Not Supported 00:24:29.314 Namespace Management: Not Supported 00:24:29.314 Device Self-Test: Not Supported 00:24:29.314 Directives: Not Supported 00:24:29.314 NVMe-MI: Not Supported 00:24:29.314 Virtualization Management: Not Supported 00:24:29.314 Doorbell Buffer Config: Not Supported 00:24:29.314 Get LBA Status Capability: Not Supported 00:24:29.314 Command & Feature Lockdown Capability: Not Supported 00:24:29.314 Abort Command Limit: 1 00:24:29.314 Async Event Request Limit: 4 00:24:29.314 Number of Firmware Slots: N/A 00:24:29.314 Firmware Slot 1 Read-Only: N/A 00:24:29.314 Firmware Activation Without Reset: N/A 00:24:29.314 Multiple Update Detection Support: N/A 00:24:29.314 Firmware Update Granularity: No Information Provided 00:24:29.314 Per-Namespace SMART Log: No 00:24:29.314 Asymmetric Namespace Access Log Page: Not 
Supported 00:24:29.314 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:29.314 Command Effects Log Page: Not Supported 00:24:29.314 Get Log Page Extended Data: Supported 00:24:29.314 Telemetry Log Pages: Not Supported 00:24:29.314 Persistent Event Log Pages: Not Supported 00:24:29.314 Supported Log Pages Log Page: May Support 00:24:29.314 Commands Supported & Effects Log Page: Not Supported 00:24:29.314 Feature Identifiers & Effects Log Page:May Support 00:24:29.314 NVMe-MI Commands & Effects Log Page: May Support 00:24:29.314 Data Area 4 for Telemetry Log: Not Supported 00:24:29.314 Error Log Page Entries Supported: 128 00:24:29.314 Keep Alive: Not Supported 00:24:29.314 00:24:29.314 NVM Command Set Attributes 00:24:29.314 ========================== 00:24:29.314 Submission Queue Entry Size 00:24:29.314 Max: 1 00:24:29.314 Min: 1 00:24:29.314 Completion Queue Entry Size 00:24:29.314 Max: 1 00:24:29.314 Min: 1 00:24:29.314 Number of Namespaces: 0 00:24:29.314 Compare Command: Not Supported 00:24:29.314 Write Uncorrectable Command: Not Supported 00:24:29.314 Dataset Management Command: Not Supported 00:24:29.314 Write Zeroes Command: Not Supported 00:24:29.314 Set Features Save Field: Not Supported 00:24:29.314 Reservations: Not Supported 00:24:29.314 Timestamp: Not Supported 00:24:29.314 Copy: Not Supported 00:24:29.314 Volatile Write Cache: Not Present 00:24:29.314 Atomic Write Unit (Normal): 1 00:24:29.314 Atomic Write Unit (PFail): 1 00:24:29.314 Atomic Compare & Write Unit: 1 00:24:29.314 Fused Compare & Write: Supported 00:24:29.314 Scatter-Gather List 00:24:29.314 SGL Command Set: Supported 00:24:29.314 SGL Keyed: Supported 00:24:29.314 SGL Bit Bucket Descriptor: Not Supported 00:24:29.314 SGL Metadata Pointer: Not Supported 00:24:29.314 Oversized SGL: Not Supported 00:24:29.314 SGL Metadata Address: Not Supported 00:24:29.314 SGL Offset: Supported 00:24:29.314 Transport SGL Data Block: Not Supported 00:24:29.314 Replay Protected Memory Block: Not 
Supported 00:24:29.314 00:24:29.314 Firmware Slot Information 00:24:29.314 ========================= 00:24:29.314 Active slot: 0 00:24:29.314 00:24:29.314 00:24:29.314 Error Log 00:24:29.314 ========= 00:24:29.314 00:24:29.314 Active Namespaces 00:24:29.314 ================= 00:24:29.314 Discovery Log Page 00:24:29.314 ================== 00:24:29.314 Generation Counter: 2 00:24:29.314 Number of Records: 2 00:24:29.314 Record Format: 0 00:24:29.314 00:24:29.314 Discovery Log Entry 0 00:24:29.314 ---------------------- 00:24:29.314 Transport Type: 3 (TCP) 00:24:29.314 Address Family: 1 (IPv4) 00:24:29.314 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:29.314 Entry Flags: 00:24:29.314 Duplicate Returned Information: 1 00:24:29.314 Explicit Persistent Connection Support for Discovery: 1 00:24:29.314 Transport Requirements: 00:24:29.314 Secure Channel: Not Required 00:24:29.314 Port ID: 0 (0x0000) 00:24:29.314 Controller ID: 65535 (0xffff) 00:24:29.314 Admin Max SQ Size: 128 00:24:29.314 Transport Service Identifier: 4420 00:24:29.314 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:29.314 Transport Address: 10.0.0.2 00:24:29.314 Discovery Log Entry 1 00:24:29.314 ---------------------- 00:24:29.314 Transport Type: 3 (TCP) 00:24:29.314 Address Family: 1 (IPv4) 00:24:29.314 Subsystem Type: 2 (NVM Subsystem) 00:24:29.314 Entry Flags: 00:24:29.314 Duplicate Returned Information: 0 00:24:29.314 Explicit Persistent Connection Support for Discovery: 0 00:24:29.314 Transport Requirements: 00:24:29.314 Secure Channel: Not Required 00:24:29.314 Port ID: 0 (0x0000) 00:24:29.314 Controller ID: 65535 (0xffff) 00:24:29.314 Admin Max SQ Size: 128 00:24:29.314 Transport Service Identifier: 4420 00:24:29.314 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:29.314 Transport Address: 10.0.0.2 [2024-07-15 12:54:21.185552] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 
00:24:29.314 [2024-07-15 12:54:21.185565] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e26e40) on tqpair=0x1da3ec0 00:24:29.314 [2024-07-15 12:54:21.185574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.314 [2024-07-15 12:54:21.185580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e26fc0) on tqpair=0x1da3ec0 00:24:29.314 [2024-07-15 12:54:21.185586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.314 [2024-07-15 12:54:21.185593] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e27140) on tqpair=0x1da3ec0 00:24:29.314 [2024-07-15 12:54:21.185599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.314 [2024-07-15 12:54:21.185606] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e272c0) on tqpair=0x1da3ec0 00:24:29.314 [2024-07-15 12:54:21.185612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.314 [2024-07-15 12:54:21.185625] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.314 [2024-07-15 12:54:21.185631] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.314 [2024-07-15 12:54:21.185635] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1da3ec0) 00:24:29.314 [2024-07-15 12:54:21.185645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.314 [2024-07-15 12:54:21.185662] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e272c0, cid 3, qid 0 00:24:29.314 [2024-07-15 12:54:21.185765] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:24:29.314 [2024-07-15 12:54:21.185774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.314 [2024-07-15 12:54:21.185779] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.314 [2024-07-15 12:54:21.185784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e272c0) on tqpair=0x1da3ec0 00:24:29.314 [2024-07-15 12:54:21.185792] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.314 [2024-07-15 12:54:21.185797] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.314 [2024-07-15 12:54:21.185801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1da3ec0) 00:24:29.314 [2024-07-15 12:54:21.185810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.314 [2024-07-15 12:54:21.185829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e272c0, cid 3, qid 0 00:24:29.314 [2024-07-15 12:54:21.185957] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.314 [2024-07-15 12:54:21.185966] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.314 [2024-07-15 12:54:21.185970] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.314 [2024-07-15 12:54:21.185974] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e272c0) on tqpair=0x1da3ec0 00:24:29.314 [2024-07-15 12:54:21.185980] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:29.314 [2024-07-15 12:54:21.185986] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:29.314 [2024-07-15 12:54:21.185999] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.314 [2024-07-15 12:54:21.186004] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:24:29.314 [2024-07-15 12:54:21.186009] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1da3ec0) 00:24:29.314 [2024-07-15 12:54:21.186017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.314 [2024-07-15 12:54:21.186030] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e272c0, cid 3, qid 0 00:24:29.314 [2024-07-15 12:54:21.186168] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.314 [2024-07-15 12:54:21.186176] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.314 [2024-07-15 12:54:21.186180] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.314 [2024-07-15 12:54:21.186185] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e272c0) on tqpair=0x1da3ec0 00:24:29.315 [2024-07-15 12:54:21.186198] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.315 [2024-07-15 12:54:21.186203] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.315 [2024-07-15 12:54:21.186208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1da3ec0) 00:24:29.315 [2024-07-15 12:54:21.186216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.315 [2024-07-15 12:54:21.186230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e272c0, cid 3, qid 0 00:24:29.315 [2024-07-15 12:54:21.186352] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.315 [2024-07-15 12:54:21.186361] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.315 [2024-07-15 12:54:21.186365] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.315 [2024-07-15 12:54:21.186370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1e272c0) on tqpair=0x1da3ec0 00:24:29.315 [2024-07-15 12:54:21.186381] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.315 [2024-07-15 12:54:21.186387] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.315 [2024-07-15 12:54:21.186391] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1da3ec0) 00:24:29.315 [2024-07-15 12:54:21.186400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.315 [2024-07-15 12:54:21.186414] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e272c0, cid 3, qid 0 00:24:29.315 [2024-07-15 12:54:21.186519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.315 [2024-07-15 12:54:21.186528] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.315 [2024-07-15 12:54:21.186532] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.315 [2024-07-15 12:54:21.186537] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e272c0) on tqpair=0x1da3ec0 00:24:29.315 [2024-07-15 12:54:21.186548] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.315 [2024-07-15 12:54:21.186554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.315 [2024-07-15 12:54:21.186558] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1da3ec0) 00:24:29.315 [2024-07-15 12:54:21.186567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.315 [2024-07-15 12:54:21.186581] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e272c0, cid 3, qid 0 00:24:29.315 [2024-07-15 12:54:21.186687] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.315 [2024-07-15 12:54:21.186695] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:24:29.315 [2024-07-15 12:54:21.186699] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.315 [2024-07-15 12:54:21.186704] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e272c0) on tqpair=0x1da3ec0 00:24:29.315 [2024-07-15 12:54:21.186716] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.315 [2024-07-15 12:54:21.186721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.315 [2024-07-15 12:54:21.186725] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1da3ec0) 00:24:29.315 [2024-07-15 12:54:21.186734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.315 [2024-07-15 12:54:21.186747] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e272c0, cid 3, qid 0 00:24:29.315 [2024-07-15 12:54:21.186853] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.315 [2024-07-15 12:54:21.186861] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.315 [2024-07-15 12:54:21.186866] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.315 [2024-07-15 12:54:21.186870] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e272c0) on tqpair=0x1da3ec0 00:24:29.315 [2024-07-15 12:54:21.186882] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.315 [2024-07-15 12:54:21.186887] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.315 [2024-07-15 12:54:21.186892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1da3ec0) 00:24:29.315 [2024-07-15 12:54:21.186900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.315 [2024-07-15 12:54:21.186914] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e272c0, cid 3, qid 0 00:24:29.315 [2024-07-15 12:54:21.187015] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.315 [2024-07-15 12:54:21.187026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.315 [2024-07-15 12:54:21.187030] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.315 [2024-07-15 12:54:21.187035] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e272c0) on tqpair=0x1da3ec0 00:24:29.315 [2024-07-15 12:54:21.187047] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.315 [2024-07-15 12:54:21.187052] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.315 [2024-07-15 12:54:21.187057] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1da3ec0) 00:24:29.315 [2024-07-15 12:54:21.187065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.315 [2024-07-15 12:54:21.187079] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e272c0, cid 3, qid 0 00:24:29.315 [2024-07-15 12:54:21.187206] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.315 [2024-07-15 12:54:21.187215] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.315 [2024-07-15 12:54:21.187219] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.315 [2024-07-15 12:54:21.187224] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e272c0) on tqpair=0x1da3ec0 00:24:29.315 [2024-07-15 12:54:21.187236] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.315 [2024-07-15 12:54:21.187241] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.315 [2024-07-15 12:54:21.187246] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1da3ec0) 00:24:29.315 [2024-07-15 12:54:21.191260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.315 [2024-07-15 12:54:21.191278] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e272c0, cid 3, qid 0 00:24:29.315 [2024-07-15 12:54:21.191477] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.315 [2024-07-15 12:54:21.191485] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.315 [2024-07-15 12:54:21.191490] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.315 [2024-07-15 12:54:21.191494] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e272c0) on tqpair=0x1da3ec0 00:24:29.315 [2024-07-15 12:54:21.191504] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:24:29.315 00:24:29.315 12:54:21 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:29.315 [2024-07-15 12:54:21.236041] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:24:29.315 [2024-07-15 12:54:21.236080] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031005 ] 00:24:29.315 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.577 [2024-07-15 12:54:21.273472] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:29.577 [2024-07-15 12:54:21.273528] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:29.577 [2024-07-15 12:54:21.273535] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:29.577 [2024-07-15 12:54:21.273548] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:29.577 [2024-07-15 12:54:21.273555] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:29.578 [2024-07-15 12:54:21.273818] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:29.578 [2024-07-15 12:54:21.273850] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x802ec0 0 00:24:29.578 [2024-07-15 12:54:21.280266] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:29.578 [2024-07-15 12:54:21.280279] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:29.578 [2024-07-15 12:54:21.280284] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:29.578 [2024-07-15 12:54:21.280288] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:29.578 [2024-07-15 12:54:21.280324] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.280331] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:24:29.578 [2024-07-15 12:54:21.280336] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x802ec0) 00:24:29.578 [2024-07-15 12:54:21.280350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:29.578 [2024-07-15 12:54:21.280368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x885e40, cid 0, qid 0 00:24:29.578 [2024-07-15 12:54:21.287266] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.578 [2024-07-15 12:54:21.287278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.578 [2024-07-15 12:54:21.287282] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.287287] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x885e40) on tqpair=0x802ec0 00:24:29.578 [2024-07-15 12:54:21.287302] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:29.578 [2024-07-15 12:54:21.287310] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:29.578 [2024-07-15 12:54:21.287317] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:29.578 [2024-07-15 12:54:21.287331] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.287337] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.287342] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x802ec0) 00:24:29.578 [2024-07-15 12:54:21.287351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.578 [2024-07-15 12:54:21.287368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x885e40, cid 0, qid 0 
00:24:29.578 [2024-07-15 12:54:21.287623] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.578 [2024-07-15 12:54:21.287631] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.578 [2024-07-15 12:54:21.287636] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.287640] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x885e40) on tqpair=0x802ec0 00:24:29.578 [2024-07-15 12:54:21.287646] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:29.578 [2024-07-15 12:54:21.287656] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:29.578 [2024-07-15 12:54:21.287665] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.287670] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.287674] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x802ec0) 00:24:29.578 [2024-07-15 12:54:21.287683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.578 [2024-07-15 12:54:21.287697] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x885e40, cid 0, qid 0 00:24:29.578 [2024-07-15 12:54:21.287869] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.578 [2024-07-15 12:54:21.287878] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.578 [2024-07-15 12:54:21.287885] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.287890] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x885e40) on tqpair=0x802ec0 00:24:29.578 [2024-07-15 12:54:21.287896] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:29.578 [2024-07-15 12:54:21.287906] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:29.578 [2024-07-15 12:54:21.287914] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.287919] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.287923] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x802ec0) 00:24:29.578 [2024-07-15 12:54:21.287932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.578 [2024-07-15 12:54:21.287946] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x885e40, cid 0, qid 0 00:24:29.578 [2024-07-15 12:54:21.288094] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.578 [2024-07-15 12:54:21.288103] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.578 [2024-07-15 12:54:21.288107] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.288112] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x885e40) on tqpair=0x802ec0 00:24:29.578 [2024-07-15 12:54:21.288118] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:29.578 [2024-07-15 12:54:21.288130] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.288135] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.288140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x802ec0) 00:24:29.578 [2024-07-15 12:54:21.288148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.578 [2024-07-15 12:54:21.288162] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x885e40, cid 0, qid 0 00:24:29.578 [2024-07-15 12:54:21.288325] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.578 [2024-07-15 12:54:21.288334] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.578 [2024-07-15 12:54:21.288339] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.288343] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x885e40) on tqpair=0x802ec0 00:24:29.578 [2024-07-15 12:54:21.288349] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:29.578 [2024-07-15 12:54:21.288354] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:29.578 [2024-07-15 12:54:21.288365] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:29.578 [2024-07-15 12:54:21.288471] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:29.578 [2024-07-15 12:54:21.288477] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:29.578 [2024-07-15 12:54:21.288486] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.288491] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.288495] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x802ec0) 00:24:29.578 [2024-07-15 12:54:21.288504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.578 [2024-07-15 12:54:21.288518] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x885e40, cid 0, qid 0 00:24:29.578 [2024-07-15 12:54:21.288698] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.578 [2024-07-15 12:54:21.288707] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.578 [2024-07-15 12:54:21.288711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.288716] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x885e40) on tqpair=0x802ec0 00:24:29.578 [2024-07-15 12:54:21.288722] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:29.578 [2024-07-15 12:54:21.288733] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.288738] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.288743] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x802ec0) 00:24:29.578 [2024-07-15 12:54:21.288751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.578 [2024-07-15 12:54:21.288764] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x885e40, cid 0, qid 0 00:24:29.578 [2024-07-15 12:54:21.288977] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.578 [2024-07-15 12:54:21.288985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.578 [2024-07-15 12:54:21.288989] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.288994] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x885e40) on tqpair=0x802ec0 00:24:29.578 [2024-07-15 12:54:21.288999] 
nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:29.578 [2024-07-15 12:54:21.289005] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:29.578 [2024-07-15 12:54:21.289015] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:29.578 [2024-07-15 12:54:21.289030] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:29.578 [2024-07-15 12:54:21.289041] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.289046] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x802ec0) 00:24:29.578 [2024-07-15 12:54:21.289055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.578 [2024-07-15 12:54:21.289069] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x885e40, cid 0, qid 0 00:24:29.578 [2024-07-15 12:54:21.289280] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:29.578 [2024-07-15 12:54:21.289289] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:29.578 [2024-07-15 12:54:21.289293] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.289298] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x802ec0): datao=0, datal=4096, cccid=0 00:24:29.578 [2024-07-15 12:54:21.289304] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x885e40) on tqpair(0x802ec0): expected_datao=0, payload_size=4096 00:24:29.578 [2024-07-15 12:54:21.289309] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.289329] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.289334] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.289412] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.578 [2024-07-15 12:54:21.289421] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.578 [2024-07-15 12:54:21.289425] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.289430] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x885e40) on tqpair=0x802ec0 00:24:29.578 [2024-07-15 12:54:21.289439] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:29.578 [2024-07-15 12:54:21.289450] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:29.578 [2024-07-15 12:54:21.289456] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:29.578 [2024-07-15 12:54:21.289461] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:29.578 [2024-07-15 12:54:21.289467] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:29.578 [2024-07-15 12:54:21.289472] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:29.578 [2024-07-15 12:54:21.289483] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:29.578 [2024-07-15 12:54:21.289491] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.289496] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.289501] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x802ec0) 00:24:29.578 [2024-07-15 12:54:21.289510] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:29.578 [2024-07-15 12:54:21.289525] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x885e40, cid 0, qid 0 00:24:29.578 [2024-07-15 12:54:21.289669] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.578 [2024-07-15 12:54:21.289678] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.578 [2024-07-15 12:54:21.289682] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.289687] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x885e40) on tqpair=0x802ec0 00:24:29.578 [2024-07-15 12:54:21.289695] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.289700] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.289704] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x802ec0) 00:24:29.578 [2024-07-15 12:54:21.289712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.578 [2024-07-15 12:54:21.289720] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.289724] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.289729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x802ec0) 00:24:29.578 [2024-07-15 12:54:21.289736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:29.578 [2024-07-15 12:54:21.289743] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.289748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.289752] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x802ec0) 00:24:29.578 [2024-07-15 12:54:21.289759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.578 [2024-07-15 12:54:21.289767] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.289771] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.289776] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x802ec0) 00:24:29.578 [2024-07-15 12:54:21.289783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.578 [2024-07-15 12:54:21.289789] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:29.578 [2024-07-15 12:54:21.289802] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:29.578 [2024-07-15 12:54:21.289812] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.289817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x802ec0) 00:24:29.578 [2024-07-15 12:54:21.289826] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.578 [2024-07-15 12:54:21.289841] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x885e40, cid 0, qid 0 00:24:29.578 [2024-07-15 12:54:21.289848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x885fc0, cid 1, qid 0 00:24:29.578 [2024-07-15 12:54:21.289854] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x886140, cid 2, qid 0 00:24:29.578 [2024-07-15 12:54:21.289860] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8862c0, cid 3, qid 0 00:24:29.578 [2024-07-15 12:54:21.289866] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x886440, cid 4, qid 0 00:24:29.578 [2024-07-15 12:54:21.290196] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.578 [2024-07-15 12:54:21.290205] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.578 [2024-07-15 12:54:21.290209] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.290214] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x886440) on tqpair=0x802ec0 00:24:29.578 [2024-07-15 12:54:21.290220] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:29.578 [2024-07-15 12:54:21.290226] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:29.578 [2024-07-15 12:54:21.290236] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:29.578 [2024-07-15 12:54:21.290244] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:29.578 [2024-07-15 12:54:21.290251] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.290263] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.578 [2024-07-15 12:54:21.290268] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x802ec0) 00:24:29.578 [2024-07-15 12:54:21.290276] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:29.579 [2024-07-15 12:54:21.290290] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x886440, cid 4, qid 0 00:24:29.579 [2024-07-15 12:54:21.290448] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.579 [2024-07-15 12:54:21.290456] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.579 [2024-07-15 12:54:21.290460] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.290465] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x886440) on tqpair=0x802ec0 00:24:29.579 [2024-07-15 12:54:21.290539] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:29.579 [2024-07-15 12:54:21.290552] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:29.579 [2024-07-15 12:54:21.290561] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.290566] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x802ec0) 00:24:29.579 [2024-07-15 12:54:21.290574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.579 [2024-07-15 12:54:21.290587] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x886440, cid 4, qid 0 00:24:29.579 [2024-07-15 12:54:21.290747] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:29.579 [2024-07-15 12:54:21.290758] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:29.579 [2024-07-15 12:54:21.290762] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.290767] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x802ec0): datao=0, datal=4096, cccid=4 00:24:29.579 [2024-07-15 12:54:21.290772] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x886440) on tqpair(0x802ec0): expected_datao=0, payload_size=4096 00:24:29.579 [2024-07-15 12:54:21.290778] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.290792] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.290797] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.335265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.579 [2024-07-15 12:54:21.335280] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.579 [2024-07-15 12:54:21.335284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.335289] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x886440) on tqpair=0x802ec0 00:24:29.579 [2024-07-15 12:54:21.335302] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:29.579 [2024-07-15 12:54:21.335315] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:29.579 [2024-07-15 12:54:21.335327] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:29.579 [2024-07-15 12:54:21.335337] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.335342] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x802ec0) 00:24:29.579 [2024-07-15 12:54:21.335351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.579 [2024-07-15 12:54:21.335368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x886440, cid 4, qid 0 00:24:29.579 [2024-07-15 12:54:21.335615] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:29.579 [2024-07-15 12:54:21.335624] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:29.579 [2024-07-15 12:54:21.335628] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.335632] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x802ec0): datao=0, datal=4096, cccid=4 00:24:29.579 [2024-07-15 12:54:21.335638] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x886440) on tqpair(0x802ec0): expected_datao=0, payload_size=4096 00:24:29.579 [2024-07-15 12:54:21.335644] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.335672] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.335677] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.381263] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.579 [2024-07-15 12:54:21.381276] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.579 [2024-07-15 12:54:21.381281] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.381286] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x886440) on tqpair=0x802ec0 00:24:29.579 [2024-07-15 12:54:21.381303] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:29.579 [2024-07-15 
12:54:21.381316] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:29.579 [2024-07-15 12:54:21.381327] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.381332] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x802ec0) 00:24:29.579 [2024-07-15 12:54:21.381342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.579 [2024-07-15 12:54:21.381364] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x886440, cid 4, qid 0 00:24:29.579 [2024-07-15 12:54:21.381586] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:29.579 [2024-07-15 12:54:21.381596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:29.579 [2024-07-15 12:54:21.381600] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.381605] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x802ec0): datao=0, datal=4096, cccid=4 00:24:29.579 [2024-07-15 12:54:21.381610] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x886440) on tqpair(0x802ec0): expected_datao=0, payload_size=4096 00:24:29.579 [2024-07-15 12:54:21.381616] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.381666] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.381672] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.423443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.579 [2024-07-15 12:54:21.423456] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.579 [2024-07-15 12:54:21.423460] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.423465] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x886440) on tqpair=0x802ec0 00:24:29.579 [2024-07-15 12:54:21.423476] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:29.579 [2024-07-15 12:54:21.423487] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:29.579 [2024-07-15 12:54:21.423499] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:29.579 [2024-07-15 12:54:21.423507] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:29.579 [2024-07-15 12:54:21.423513] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:29.579 [2024-07-15 12:54:21.423520] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:29.579 [2024-07-15 12:54:21.423526] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:29.579 [2024-07-15 12:54:21.423532] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:29.579 [2024-07-15 12:54:21.423538] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:29.579 [2024-07-15 12:54:21.423555] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.423561] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0x802ec0) 00:24:29.579 [2024-07-15 12:54:21.423570] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.579 [2024-07-15 12:54:21.423578] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.423583] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.423588] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x802ec0) 00:24:29.579 [2024-07-15 12:54:21.423596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.579 [2024-07-15 12:54:21.423615] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x886440, cid 4, qid 0 00:24:29.579 [2024-07-15 12:54:21.423622] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8865c0, cid 5, qid 0 00:24:29.579 [2024-07-15 12:54:21.423847] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.579 [2024-07-15 12:54:21.423855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.579 [2024-07-15 12:54:21.423860] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.423864] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x886440) on tqpair=0x802ec0 00:24:29.579 [2024-07-15 12:54:21.423872] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.579 [2024-07-15 12:54:21.423880] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.579 [2024-07-15 12:54:21.423884] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.423889] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8865c0) on tqpair=0x802ec0 00:24:29.579 [2024-07-15 12:54:21.423901] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.423906] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x802ec0) 00:24:29.579 [2024-07-15 12:54:21.423914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.579 [2024-07-15 12:54:21.423928] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8865c0, cid 5, qid 0 00:24:29.579 [2024-07-15 12:54:21.424088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.579 [2024-07-15 12:54:21.424096] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.579 [2024-07-15 12:54:21.424100] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.424105] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8865c0) on tqpair=0x802ec0 00:24:29.579 [2024-07-15 12:54:21.424116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.424121] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x802ec0) 00:24:29.579 [2024-07-15 12:54:21.424129] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.579 [2024-07-15 12:54:21.424143] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8865c0, cid 5, qid 0 00:24:29.579 [2024-07-15 12:54:21.428265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.579 [2024-07-15 12:54:21.428275] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.579 [2024-07-15 12:54:21.428280] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.428284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8865c0) on 
tqpair=0x802ec0 00:24:29.579 [2024-07-15 12:54:21.428296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.428302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x802ec0) 00:24:29.579 [2024-07-15 12:54:21.428310] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.579 [2024-07-15 12:54:21.428325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8865c0, cid 5, qid 0 00:24:29.579 [2024-07-15 12:54:21.428543] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.579 [2024-07-15 12:54:21.428552] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.579 [2024-07-15 12:54:21.428556] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.428561] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8865c0) on tqpair=0x802ec0 00:24:29.579 [2024-07-15 12:54:21.428580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.428586] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x802ec0) 00:24:29.579 [2024-07-15 12:54:21.428594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.579 [2024-07-15 12:54:21.428603] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.428611] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x802ec0) 00:24:29.579 [2024-07-15 12:54:21.428619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.579 [2024-07-15 
12:54:21.428627] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.428632] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x802ec0) 00:24:29.579 [2024-07-15 12:54:21.428640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.579 [2024-07-15 12:54:21.428649] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.428654] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x802ec0) 00:24:29.579 [2024-07-15 12:54:21.428662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.579 [2024-07-15 12:54:21.428677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8865c0, cid 5, qid 0 00:24:29.579 [2024-07-15 12:54:21.428684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x886440, cid 4, qid 0 00:24:29.579 [2024-07-15 12:54:21.428690] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x886740, cid 6, qid 0 00:24:29.579 [2024-07-15 12:54:21.428696] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8868c0, cid 7, qid 0 00:24:29.579 [2024-07-15 12:54:21.429109] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:29.579 [2024-07-15 12:54:21.429117] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:29.579 [2024-07-15 12:54:21.429121] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.429126] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x802ec0): datao=0, datal=8192, cccid=5 00:24:29.579 [2024-07-15 12:54:21.429132] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x8865c0) on tqpair(0x802ec0): expected_datao=0, payload_size=8192 00:24:29.579 [2024-07-15 12:54:21.429137] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.429170] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.429176] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.429183] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:29.579 [2024-07-15 12:54:21.429190] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:29.579 [2024-07-15 12:54:21.429194] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.429198] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x802ec0): datao=0, datal=512, cccid=4 00:24:29.579 [2024-07-15 12:54:21.429204] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x886440) on tqpair(0x802ec0): expected_datao=0, payload_size=512 00:24:29.579 [2024-07-15 12:54:21.429210] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.429218] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.429222] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.429229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:29.579 [2024-07-15 12:54:21.429236] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:29.579 [2024-07-15 12:54:21.429241] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.429245] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x802ec0): datao=0, datal=512, cccid=6 00:24:29.579 [2024-07-15 12:54:21.429251] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x886740) on tqpair(0x802ec0): expected_datao=0, payload_size=512 
00:24:29.579 [2024-07-15 12:54:21.429262] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.429270] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.429277] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.429284] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:29.579 [2024-07-15 12:54:21.429291] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:29.579 [2024-07-15 12:54:21.429296] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.429300] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x802ec0): datao=0, datal=4096, cccid=7 00:24:29.579 [2024-07-15 12:54:21.429305] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8868c0) on tqpair(0x802ec0): expected_datao=0, payload_size=4096 00:24:29.579 [2024-07-15 12:54:21.429311] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.429319] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.429323] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:29.579 [2024-07-15 12:54:21.429334] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.580 [2024-07-15 12:54:21.429341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.580 [2024-07-15 12:54:21.429345] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.580 [2024-07-15 12:54:21.429350] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8865c0) on tqpair=0x802ec0 00:24:29.580 [2024-07-15 12:54:21.429365] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.580 [2024-07-15 12:54:21.429372] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.580 [2024-07-15 12:54:21.429377] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.580 [2024-07-15 12:54:21.429382] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x886440) on tqpair=0x802ec0 00:24:29.580 [2024-07-15 12:54:21.429393] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.580 [2024-07-15 12:54:21.429401] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.580 [2024-07-15 12:54:21.429405] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.580 [2024-07-15 12:54:21.429410] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x886740) on tqpair=0x802ec0 00:24:29.580 [2024-07-15 12:54:21.429418] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.580 [2024-07-15 12:54:21.429426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.580 [2024-07-15 12:54:21.429430] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.580 [2024-07-15 12:54:21.429435] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8868c0) on tqpair=0x802ec0 00:24:29.580 ===================================================== 00:24:29.580 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:29.580 ===================================================== 00:24:29.580 Controller Capabilities/Features 00:24:29.580 ================================ 00:24:29.580 Vendor ID: 8086 00:24:29.580 Subsystem Vendor ID: 8086 00:24:29.580 Serial Number: SPDK00000000000001 00:24:29.580 Model Number: SPDK bdev Controller 00:24:29.580 Firmware Version: 24.09 00:24:29.580 Recommended Arb Burst: 6 00:24:29.580 IEEE OUI Identifier: e4 d2 5c 00:24:29.580 Multi-path I/O 00:24:29.580 May have multiple subsystem ports: Yes 00:24:29.580 May have multiple controllers: Yes 00:24:29.580 Associated with SR-IOV VF: No 00:24:29.580 Max Data Transfer Size: 131072 00:24:29.580 Max Number of Namespaces: 32 00:24:29.580 Max Number of I/O 
Queues: 127 00:24:29.580 NVMe Specification Version (VS): 1.3 00:24:29.580 NVMe Specification Version (Identify): 1.3 00:24:29.580 Maximum Queue Entries: 128 00:24:29.580 Contiguous Queues Required: Yes 00:24:29.580 Arbitration Mechanisms Supported 00:24:29.580 Weighted Round Robin: Not Supported 00:24:29.580 Vendor Specific: Not Supported 00:24:29.580 Reset Timeout: 15000 ms 00:24:29.580 Doorbell Stride: 4 bytes 00:24:29.580 NVM Subsystem Reset: Not Supported 00:24:29.580 Command Sets Supported 00:24:29.580 NVM Command Set: Supported 00:24:29.580 Boot Partition: Not Supported 00:24:29.580 Memory Page Size Minimum: 4096 bytes 00:24:29.580 Memory Page Size Maximum: 4096 bytes 00:24:29.580 Persistent Memory Region: Not Supported 00:24:29.580 Optional Asynchronous Events Supported 00:24:29.580 Namespace Attribute Notices: Supported 00:24:29.580 Firmware Activation Notices: Not Supported 00:24:29.580 ANA Change Notices: Not Supported 00:24:29.580 PLE Aggregate Log Change Notices: Not Supported 00:24:29.580 LBA Status Info Alert Notices: Not Supported 00:24:29.580 EGE Aggregate Log Change Notices: Not Supported 00:24:29.580 Normal NVM Subsystem Shutdown event: Not Supported 00:24:29.580 Zone Descriptor Change Notices: Not Supported 00:24:29.580 Discovery Log Change Notices: Not Supported 00:24:29.580 Controller Attributes 00:24:29.580 128-bit Host Identifier: Supported 00:24:29.580 Non-Operational Permissive Mode: Not Supported 00:24:29.580 NVM Sets: Not Supported 00:24:29.580 Read Recovery Levels: Not Supported 00:24:29.580 Endurance Groups: Not Supported 00:24:29.580 Predictable Latency Mode: Not Supported 00:24:29.580 Traffic Based Keep ALive: Not Supported 00:24:29.580 Namespace Granularity: Not Supported 00:24:29.580 SQ Associations: Not Supported 00:24:29.580 UUID List: Not Supported 00:24:29.580 Multi-Domain Subsystem: Not Supported 00:24:29.580 Fixed Capacity Management: Not Supported 00:24:29.580 Variable Capacity Management: Not Supported 00:24:29.580 Delete 
Endurance Group: Not Supported 00:24:29.580 Delete NVM Set: Not Supported 00:24:29.580 Extended LBA Formats Supported: Not Supported 00:24:29.580 Flexible Data Placement Supported: Not Supported 00:24:29.580 00:24:29.580 Controller Memory Buffer Support 00:24:29.580 ================================ 00:24:29.580 Supported: No 00:24:29.580 00:24:29.580 Persistent Memory Region Support 00:24:29.580 ================================ 00:24:29.580 Supported: No 00:24:29.580 00:24:29.580 Admin Command Set Attributes 00:24:29.580 ============================ 00:24:29.580 Security Send/Receive: Not Supported 00:24:29.580 Format NVM: Not Supported 00:24:29.580 Firmware Activate/Download: Not Supported 00:24:29.580 Namespace Management: Not Supported 00:24:29.580 Device Self-Test: Not Supported 00:24:29.580 Directives: Not Supported 00:24:29.580 NVMe-MI: Not Supported 00:24:29.580 Virtualization Management: Not Supported 00:24:29.580 Doorbell Buffer Config: Not Supported 00:24:29.580 Get LBA Status Capability: Not Supported 00:24:29.580 Command & Feature Lockdown Capability: Not Supported 00:24:29.580 Abort Command Limit: 4 00:24:29.580 Async Event Request Limit: 4 00:24:29.580 Number of Firmware Slots: N/A 00:24:29.580 Firmware Slot 1 Read-Only: N/A 00:24:29.580 Firmware Activation Without Reset: N/A 00:24:29.580 Multiple Update Detection Support: N/A 00:24:29.580 Firmware Update Granularity: No Information Provided 00:24:29.580 Per-Namespace SMART Log: No 00:24:29.580 Asymmetric Namespace Access Log Page: Not Supported 00:24:29.580 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:29.580 Command Effects Log Page: Supported 00:24:29.580 Get Log Page Extended Data: Supported 00:24:29.580 Telemetry Log Pages: Not Supported 00:24:29.580 Persistent Event Log Pages: Not Supported 00:24:29.580 Supported Log Pages Log Page: May Support 00:24:29.580 Commands Supported & Effects Log Page: Not Supported 00:24:29.580 Feature Identifiers & Effects Log Page:May Support 00:24:29.580 NVMe-MI 
Commands & Effects Log Page: May Support 00:24:29.580 Data Area 4 for Telemetry Log: Not Supported 00:24:29.580 Error Log Page Entries Supported: 128 00:24:29.580 Keep Alive: Supported 00:24:29.580 Keep Alive Granularity: 10000 ms 00:24:29.580 00:24:29.580 NVM Command Set Attributes 00:24:29.580 ========================== 00:24:29.580 Submission Queue Entry Size 00:24:29.580 Max: 64 00:24:29.580 Min: 64 00:24:29.580 Completion Queue Entry Size 00:24:29.580 Max: 16 00:24:29.580 Min: 16 00:24:29.580 Number of Namespaces: 32 00:24:29.580 Compare Command: Supported 00:24:29.580 Write Uncorrectable Command: Not Supported 00:24:29.580 Dataset Management Command: Supported 00:24:29.580 Write Zeroes Command: Supported 00:24:29.580 Set Features Save Field: Not Supported 00:24:29.580 Reservations: Supported 00:24:29.580 Timestamp: Not Supported 00:24:29.580 Copy: Supported 00:24:29.580 Volatile Write Cache: Present 00:24:29.580 Atomic Write Unit (Normal): 1 00:24:29.580 Atomic Write Unit (PFail): 1 00:24:29.580 Atomic Compare & Write Unit: 1 00:24:29.580 Fused Compare & Write: Supported 00:24:29.580 Scatter-Gather List 00:24:29.580 SGL Command Set: Supported 00:24:29.580 SGL Keyed: Supported 00:24:29.580 SGL Bit Bucket Descriptor: Not Supported 00:24:29.580 SGL Metadata Pointer: Not Supported 00:24:29.580 Oversized SGL: Not Supported 00:24:29.580 SGL Metadata Address: Not Supported 00:24:29.580 SGL Offset: Supported 00:24:29.580 Transport SGL Data Block: Not Supported 00:24:29.580 Replay Protected Memory Block: Not Supported 00:24:29.580 00:24:29.580 Firmware Slot Information 00:24:29.580 ========================= 00:24:29.580 Active slot: 1 00:24:29.580 Slot 1 Firmware Revision: 24.09 00:24:29.580 00:24:29.580 00:24:29.580 Commands Supported and Effects 00:24:29.580 ============================== 00:24:29.580 Admin Commands 00:24:29.580 -------------- 00:24:29.580 Get Log Page (02h): Supported 00:24:29.580 Identify (06h): Supported 00:24:29.580 Abort (08h): Supported 
00:24:29.580 Set Features (09h): Supported 00:24:29.580 Get Features (0Ah): Supported 00:24:29.580 Asynchronous Event Request (0Ch): Supported 00:24:29.580 Keep Alive (18h): Supported 00:24:29.580 I/O Commands 00:24:29.580 ------------ 00:24:29.580 Flush (00h): Supported LBA-Change 00:24:29.580 Write (01h): Supported LBA-Change 00:24:29.580 Read (02h): Supported 00:24:29.580 Compare (05h): Supported 00:24:29.580 Write Zeroes (08h): Supported LBA-Change 00:24:29.580 Dataset Management (09h): Supported LBA-Change 00:24:29.580 Copy (19h): Supported LBA-Change 00:24:29.580 00:24:29.580 Error Log 00:24:29.580 ========= 00:24:29.580 00:24:29.580 Arbitration 00:24:29.580 =========== 00:24:29.580 Arbitration Burst: 1 00:24:29.580 00:24:29.580 Power Management 00:24:29.580 ================ 00:24:29.580 Number of Power States: 1 00:24:29.580 Current Power State: Power State #0 00:24:29.580 Power State #0: 00:24:29.580 Max Power: 0.00 W 00:24:29.580 Non-Operational State: Operational 00:24:29.580 Entry Latency: Not Reported 00:24:29.580 Exit Latency: Not Reported 00:24:29.580 Relative Read Throughput: 0 00:24:29.580 Relative Read Latency: 0 00:24:29.580 Relative Write Throughput: 0 00:24:29.580 Relative Write Latency: 0 00:24:29.580 Idle Power: Not Reported 00:24:29.580 Active Power: Not Reported 00:24:29.580 Non-Operational Permissive Mode: Not Supported 00:24:29.580 00:24:29.580 Health Information 00:24:29.580 ================== 00:24:29.580 Critical Warnings: 00:24:29.580 Available Spare Space: OK 00:24:29.580 Temperature: OK 00:24:29.580 Device Reliability: OK 00:24:29.580 Read Only: No 00:24:29.580 Volatile Memory Backup: OK 00:24:29.580 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:29.580 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:29.580 Available Spare: 0% 00:24:29.580 Available Spare Threshold: 0% 00:24:29.580 Life Percentage Used:[2024-07-15 12:54:21.429553] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.580 [2024-07-15 
12:54:21.429560] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x802ec0) 00:24:29.580 [2024-07-15 12:54:21.429568] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.580 [2024-07-15 12:54:21.429584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8868c0, cid 7, qid 0 00:24:29.580 [2024-07-15 12:54:21.429755] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.580 [2024-07-15 12:54:21.429763] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.580 [2024-07-15 12:54:21.429768] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.580 [2024-07-15 12:54:21.429772] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8868c0) on tqpair=0x802ec0 00:24:29.580 [2024-07-15 12:54:21.429811] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:29.580 [2024-07-15 12:54:21.429824] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x885e40) on tqpair=0x802ec0 00:24:29.580 [2024-07-15 12:54:21.429832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.580 [2024-07-15 12:54:21.429839] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x885fc0) on tqpair=0x802ec0 00:24:29.580 [2024-07-15 12:54:21.429847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.580 [2024-07-15 12:54:21.429853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x886140) on tqpair=0x802ec0 00:24:29.580 [2024-07-15 12:54:21.429859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.580 [2024-07-15 
12:54:21.429865] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8862c0) on tqpair=0x802ec0 00:24:29.580 [2024-07-15 12:54:21.429871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.580 [2024-07-15 12:54:21.429881] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.581 [2024-07-15 12:54:21.429886] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.581 [2024-07-15 12:54:21.429891] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x802ec0) 00:24:29.581 [2024-07-15 12:54:21.429899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.581 [2024-07-15 12:54:21.429915] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8862c0, cid 3, qid 0 00:24:29.581 [2024-07-15 12:54:21.430066] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.581 [2024-07-15 12:54:21.430074] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.581 [2024-07-15 12:54:21.430078] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.581 [2024-07-15 12:54:21.430083] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8862c0) on tqpair=0x802ec0 00:24:29.581 [2024-07-15 12:54:21.430091] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.581 [2024-07-15 12:54:21.430096] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.581 [2024-07-15 12:54:21.430101] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x802ec0) 00:24:29.581 [2024-07-15 12:54:21.430109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.581 [2024-07-15 12:54:21.430127] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8862c0, cid 3, qid 0 00:24:29.581 [2024-07-15 12:54:21.430304] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.581 [2024-07-15 12:54:21.430313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.581 [2024-07-15 12:54:21.430318] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.581 [2024-07-15 12:54:21.430323] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8862c0) on tqpair=0x802ec0 00:24:29.581 [2024-07-15 12:54:21.430328] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:29.581 [2024-07-15 12:54:21.430334] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:29.581 [2024-07-15 12:54:21.430346] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.581 [2024-07-15 12:54:21.430352] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.581 [2024-07-15 12:54:21.430356] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x802ec0) 00:24:29.581 [2024-07-15 12:54:21.430365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.581 [2024-07-15 12:54:21.430378] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8862c0, cid 3, qid 0 00:24:29.581 [2024-07-15 12:54:21.430521] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.581 [2024-07-15 12:54:21.430530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.581 [2024-07-15 12:54:21.430534] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.581 [2024-07-15 12:54:21.430539] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8862c0) on tqpair=0x802ec0 00:24:29.581 [2024-07-15 12:54:21.430551] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.581 [2024-07-15 12:54:21.430556] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.581 [2024-07-15 12:54:21.430563] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x802ec0) 00:24:29.581 [2024-07-15 12:54:21.430571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.581 [2024-07-15 12:54:21.430584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8862c0, cid 3, qid 0 00:24:29.581 [2024-07-15 12:54:21.430741] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.581 [2024-07-15 12:54:21.430750] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.581 [2024-07-15 12:54:21.430754] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.581 [2024-07-15 12:54:21.430759] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8862c0) on tqpair=0x802ec0 00:24:29.581 [2024-07-15 12:54:21.430770] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.581 [2024-07-15 12:54:21.430776] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.581 [2024-07-15 12:54:21.430780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x802ec0) 00:24:29.581 [2024-07-15 12:54:21.430789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.581 [2024-07-15 12:54:21.430802] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8862c0, cid 3, qid 0 00:24:29.581 [2024-07-15 12:54:21.430958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.581 [2024-07-15 12:54:21.430967] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.581 [2024-07-15 12:54:21.430972] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.581 [2024-07-15 12:54:21.430976] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8862c0) on tqpair=0x802ec0 00:24:29.581 [2024-07-15 12:54:21.430988] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.581 [2024-07-15 12:54:21.430994] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.581 [2024-07-15 12:54:21.430998] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x802ec0) 00:24:29.581 [2024-07-15 12:54:21.431006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.581 [2024-07-15 12:54:21.431020] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8862c0, cid 3, qid 0 00:24:29.581 [2024-07-15 12:54:21.431158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:29.581 [2024-07-15 12:54:21.431166] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:29.581 [2024-07-15 12:54:21.431170] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:29.581 [2024-07-15 12:54:21.431175] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8862c0) on tqpair=0x802ec0 00:24:29.581 [2024-07-15 12:54:21.431187] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:29.581 [2024-07-15 12:54:21.431192] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:29.581 [2024-07-15 12:54:21.431196] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x802ec0) 00:24:29.581 [2024-07-15 12:54:21.431205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.581 [2024-07-15 12:54:21.431218] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8862c0, cid 3, qid 0 00:24:29.581 [2024-07-15 
12:54:21.431377] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:29.581 [2024-07-15 12:54:21.431386] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:29.581 [2024-07-15 12:54:21.431390] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:29.581 [2024-07-15 12:54:21.431395] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8862c0) on tqpair=0x802ec0
00:24:29.581 [2024-07-15 12:54:21.431406] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:29.581 [2024-07-15 12:54:21.431412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:29.581 [2024-07-15 12:54:21.431417] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x802ec0)
00:24:29.581 [2024-07-15 12:54:21.431427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.581 [2024-07-15 12:54:21.431441] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8862c0, cid 3, qid 0
00:24:29.581 [2024-07-15 12:54:21.431595] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:29.581 [2024-07-15 12:54:21.431603] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:29.581 [2024-07-15 12:54:21.431608] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:29.581 [2024-07-15 12:54:21.431612] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8862c0) on tqpair=0x802ec0
00:24:29.581 [2024-07-15 12:54:21.431624] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:29.581 [2024-07-15 12:54:21.431629] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:29.581 [2024-07-15 12:54:21.431634] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x802ec0)
00:24:29.581 [2024-07-15 12:54:21.431642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.581 [2024-07-15 12:54:21.431655] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8862c0, cid 3, qid 0
00:24:29.581 [2024-07-15 12:54:21.431785] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:29.581 [2024-07-15 12:54:21.431793] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:29.581 [2024-07-15 12:54:21.431798] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:29.581 [2024-07-15 12:54:21.431802] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8862c0) on tqpair=0x802ec0
00:24:29.581 [2024-07-15 12:54:21.431814] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:29.581 [2024-07-15 12:54:21.431819] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:29.581 [2024-07-15 12:54:21.431824] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x802ec0)
00:24:29.581 [2024-07-15 12:54:21.431832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.581 [2024-07-15 12:54:21.431845] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8862c0, cid 3, qid 0
00:24:29.581 [2024-07-15 12:54:21.431983] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:29.581 [2024-07-15 12:54:21.431991] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:29.581 [2024-07-15 12:54:21.431996] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:29.581 [2024-07-15 12:54:21.432000] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8862c0) on tqpair=0x802ec0
00:24:29.581 [2024-07-15 12:54:21.432012] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:29.581 [2024-07-15 12:54:21.432017] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:29.581 [2024-07-15 12:54:21.432022] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x802ec0)
00:24:29.581 [2024-07-15 12:54:21.432030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.581 [2024-07-15 12:54:21.432043] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8862c0, cid 3, qid 0
00:24:29.581 [2024-07-15 12:54:21.432181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:29.581 [2024-07-15 12:54:21.432189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:29.581 [2024-07-15 12:54:21.432194] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:29.581 [2024-07-15 12:54:21.432199] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8862c0) on tqpair=0x802ec0
00:24:29.581 [2024-07-15 12:54:21.432210] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:29.581 [2024-07-15 12:54:21.432216] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:29.581 [2024-07-15 12:54:21.432220] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x802ec0)
00:24:29.581 [2024-07-15 12:54:21.432229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.581 [2024-07-15 12:54:21.432244] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8862c0, cid 3, qid 0
00:24:29.581 [2024-07-15 12:54:21.436267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:29.581 [2024-07-15 12:54:21.436279] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:29.581 [2024-07-15 12:54:21.436284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:29.581 [2024-07-15 12:54:21.436288] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8862c0) on tqpair=0x802ec0
00:24:29.581 [2024-07-15 12:54:21.436301] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:29.581 [2024-07-15 12:54:21.436307] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:29.581 [2024-07-15 12:54:21.436311] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x802ec0)
00:24:29.581 [2024-07-15 12:54:21.436320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.581 [2024-07-15 12:54:21.436335] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8862c0, cid 3, qid 0
00:24:29.581 [2024-07-15 12:54:21.436550] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:29.581 [2024-07-15 12:54:21.436559] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:29.581 [2024-07-15 12:54:21.436563] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:29.581 [2024-07-15 12:54:21.436568] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8862c0) on tqpair=0x802ec0
00:24:29.581 [2024-07-15 12:54:21.436577] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds
00:24:29.581 0%
00:24:29.581 Data Units Read: 0
00:24:29.581 Data Units Written: 0
00:24:29.581 Host Read Commands: 0
00:24:29.581 Host Write Commands: 0
00:24:29.581 Controller Busy Time: 0 minutes
00:24:29.581 Power Cycles: 0
00:24:29.581 Power On Hours: 0 hours
00:24:29.581 Unsafe Shutdowns: 0
00:24:29.581 Unrecoverable Media Errors: 0
00:24:29.581 Lifetime Error Log Entries: 0
00:24:29.581 Warning Temperature Time: 0 minutes
00:24:29.581 Critical Temperature Time: 0 minutes
00:24:29.581
00:24:29.581 Number of Queues
00:24:29.581 ================
00:24:29.581 Number of I/O Submission Queues: 127
00:24:29.581 Number of I/O Completion Queues: 127
00:24:29.581
00:24:29.581 Active Namespaces
=================
00:24:29.581 Namespace ID:1
00:24:29.581 Error Recovery Timeout: Unlimited
00:24:29.581 Command Set Identifier: NVM (00h)
00:24:29.581 Deallocate: Supported
00:24:29.581 Deallocated/Unwritten Error: Not Supported
00:24:29.581 Deallocated Read Value: Unknown
00:24:29.581 Deallocate in Write Zeroes: Not Supported
00:24:29.581 Deallocated Guard Field: 0xFFFF
00:24:29.581 Flush: Supported
00:24:29.581 Reservation: Supported
00:24:29.581 Namespace Sharing Capabilities: Multiple Controllers
00:24:29.581 Size (in LBAs): 131072 (0GiB)
00:24:29.581 Capacity (in LBAs): 131072 (0GiB)
00:24:29.581 Utilization (in LBAs): 131072 (0GiB)
00:24:29.581 NGUID: ABCDEF0123456789ABCDEF0123456789
00:24:29.581 EUI64: ABCDEF0123456789
00:24:29.581 UUID: 227762df-c1c3-4c50-a439-8259f98ca0dc
00:24:29.581 Thin Provisioning: Not Supported
00:24:29.581 Per-NS Atomic Units: Yes
00:24:29.581 Atomic Boundary Size (Normal): 0
00:24:29.581 Atomic Boundary Size (PFail): 0
00:24:29.581 Atomic Boundary Offset: 0
00:24:29.581 Maximum Single Source Range Length: 65535
00:24:29.581 Maximum Copy Length: 65535
00:24:29.581 Maximum Source Range Count: 1
00:24:29.581 NGUID/EUI64 Never Reused: No
00:24:29.581 Namespace Write Protected: No
00:24:29.581 Number of LBA Formats: 1
00:24:29.581 Current LBA Format: LBA Format #00
00:24:29.581 LBA Format #00: Data Size: 512 Metadata Size: 0
00:24:29.581
00:24:29.581 12:54:21 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
00:24:29.581 12:54:21 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:29.581 12:54:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:29.581 12:54:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:29.581 12:54:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:29.581 12:54:21 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:24:29.581
12:54:21 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:24:29.581 12:54:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:29.581 12:54:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync
00:24:29.581 12:54:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:29.581 12:54:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e
00:24:29.581 12:54:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:29.581 12:54:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:29.582 rmmod nvme_tcp
00:24:29.582 rmmod nvme_fabrics
00:24:29.582 rmmod nvme_keyring
00:24:29.838 12:54:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:29.839 12:54:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e
00:24:29.839 12:54:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0
00:24:29.839 12:54:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 4030723 ']'
00:24:29.839 12:54:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 4030723
00:24:29.839 12:54:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 4030723 ']'
00:24:29.839 12:54:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 4030723
00:24:29.839 12:54:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname
00:24:29.839 12:54:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:29.839 12:54:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4030723
00:24:29.839 12:54:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:24:29.839 12:54:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:24:29.839 12:54:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4030723'
00:24:29.839 killing process with pid 4030723
00:24:29.839 12:54:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 4030723
00:24:29.839 12:54:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 4030723
00:24:30.096 12:54:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:30.096 12:54:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:30.096 12:54:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:30.096 12:54:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:30.096 12:54:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:30.096 12:54:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:30.096 12:54:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:30.096 12:54:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:32.049 12:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:32.049
00:24:32.049 real 0m10.029s
00:24:32.049 user 0m8.641s
00:24:32.049 sys 0m4.900s
00:24:32.049 12:54:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:32.049 12:54:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:32.049 ************************************
00:24:32.049 END TEST nvmf_identify
00:24:32.049 ************************************
00:24:32.049 12:54:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:24:32.049 12:54:23 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:24:32.049 12:54:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:24:32.049 12:54:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:32.049 12:54:23 nvmf_tcp --
common/autotest_common.sh@10 -- # set +x 00:24:32.049 ************************************ 00:24:32.049 START TEST nvmf_perf 00:24:32.049 ************************************ 00:24:32.049 12:54:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:32.308 * Looking for test storage... 00:24:32.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:32.308 12:54:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:37.579 12:54:29 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:37.579 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:37.579 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:37.579 Found net devices under 0000:af:00.0: cvl_0_0 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: 
cvl_0_1' 00:24:37.579 Found net devices under 0000:af:00.1: cvl_0_1 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.579 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:37.580 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.580 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.580 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:37.580 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:37.580 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.580 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.839 
12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:37.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:37.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms
00:24:37.839
00:24:37.839 --- 10.0.0.2 ping statistics ---
00:24:37.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:37.839 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:37.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:37.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms
00:24:37.839
00:24:37.839 --- 10.0.0.1 ping statistics ---
00:24:37.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:37.839 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=4034717
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 4034717
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 4034717 ']'
00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- #
local rpc_addr=/var/tmp/spdk.sock 00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:37.839 12:54:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:38.098 [2024-07-15 12:54:29.803141] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:24:38.098 [2024-07-15 12:54:29.803182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.098 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.098 [2024-07-15 12:54:29.881826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:38.098 [2024-07-15 12:54:30.004236] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.098 [2024-07-15 12:54:30.004295] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.098 [2024-07-15 12:54:30.004312] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.098 [2024-07-15 12:54:30.004325] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.098 [2024-07-15 12:54:30.004337] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:38.098 [2024-07-15 12:54:30.004399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.098 [2024-07-15 12:54:30.004538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.098 [2024-07-15 12:54:30.004648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:38.098 [2024-07-15 12:54:30.004651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.035 12:54:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:39.035 12:54:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:24:39.035 12:54:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:39.035 12:54:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:39.035 12:54:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:39.035 12:54:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.035 12:54:30 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:39.035 12:54:30 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:42.323 12:54:33 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:42.323 12:54:33 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:42.323 12:54:34 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:86:00.0 00:24:42.323 12:54:34 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:42.582 12:54:34 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:42.582 12:54:34 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:86:00.0 ']' 00:24:42.582 12:54:34 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:42.583 12:54:34 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:42.583 12:54:34 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:42.583 [2024-07-15 12:54:34.456022] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.583 12:54:34 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:42.841 12:54:34 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:42.841 12:54:34 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:43.100 12:54:35 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:43.100 12:54:35 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:43.359 12:54:35 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:43.618 [2024-07-15 12:54:35.489527] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.618 12:54:35 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:43.876 12:54:35 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:86:00.0 ']' 00:24:43.876 12:54:35 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 
00:24:43.876 12:54:35 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:43.876 12:54:35 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:24:45.254 Initializing NVMe Controllers 00:24:45.254 Attached to NVMe Controller at 0000:86:00.0 [8086:0a54] 00:24:45.254 Associating PCIE (0000:86:00.0) NSID 1 with lcore 0 00:24:45.254 Initialization complete. Launching workers. 00:24:45.254 ======================================================== 00:24:45.254 Latency(us) 00:24:45.254 Device Information : IOPS MiB/s Average min max 00:24:45.254 PCIE (0000:86:00.0) NSID 1 from core 0: 69504.00 271.50 459.82 47.72 4377.16 00:24:45.254 ======================================================== 00:24:45.254 Total : 69504.00 271.50 459.82 47.72 4377.16 00:24:45.254 00:24:45.254 12:54:36 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:45.254 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.632 Initializing NVMe Controllers 00:24:46.632 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:46.632 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:46.632 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:46.632 Initialization complete. Launching workers. 
00:24:46.632 ======================================================== 00:24:46.632 Latency(us) 00:24:46.632 Device Information : IOPS MiB/s Average min max 00:24:46.632 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 104.63 0.41 9645.16 235.34 45103.43 00:24:46.632 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 59.79 0.23 17256.98 6828.34 47887.26 00:24:46.632 ======================================================== 00:24:46.632 Total : 164.42 0.64 12413.09 235.34 47887.26 00:24:46.632 00:24:46.632 12:54:38 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:46.632 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.008 Initializing NVMe Controllers 00:24:48.009 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:48.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:48.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:48.009 Initialization complete. Launching workers. 
00:24:48.009 ======================================================== 00:24:48.009 Latency(us) 00:24:48.009 Device Information : IOPS MiB/s Average min max 00:24:48.009 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4326.40 16.90 7395.42 724.81 12733.99 00:24:48.009 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3790.22 14.81 8455.08 6714.23 20016.56 00:24:48.009 ======================================================== 00:24:48.009 Total : 8116.61 31.71 7890.25 724.81 20016.56 00:24:48.009 00:24:48.009 12:54:39 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:48.009 12:54:39 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:48.009 12:54:39 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:48.009 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.543 Initializing NVMe Controllers 00:24:50.543 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:50.543 Controller IO queue size 128, less than required. 00:24:50.543 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:50.543 Controller IO queue size 128, less than required. 00:24:50.543 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:50.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:50.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:50.543 Initialization complete. Launching workers. 
00:24:50.543 ======================================================== 00:24:50.543 Latency(us) 00:24:50.543 Device Information : IOPS MiB/s Average min max 00:24:50.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1264.74 316.18 104320.73 53787.16 164674.84 00:24:50.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 562.66 140.67 231626.79 65221.12 366138.03 00:24:50.543 ======================================================== 00:24:50.544 Total : 1827.40 456.85 143518.60 53787.16 366138.03 00:24:50.544 00:24:50.544 12:54:42 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:50.544 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.802 No valid NVMe controllers or AIO or URING devices found 00:24:50.802 Initializing NVMe Controllers 00:24:50.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:50.802 Controller IO queue size 128, less than required. 00:24:50.802 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:50.802 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:50.802 Controller IO queue size 128, less than required. 00:24:50.802 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:50.802 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:50.802 WARNING: Some requested NVMe devices were skipped 00:24:50.802 12:54:42 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:50.802 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.336 Initializing NVMe Controllers 00:24:53.336 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:53.336 Controller IO queue size 128, less than required. 00:24:53.336 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:53.336 Controller IO queue size 128, less than required. 00:24:53.336 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:53.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:53.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:53.336 Initialization complete. Launching workers. 
00:24:53.336 00:24:53.336 ==================== 00:24:53.336 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:53.336 TCP transport: 00:24:53.336 polls: 13445 00:24:53.336 idle_polls: 7401 00:24:53.336 sock_completions: 6044 00:24:53.336 nvme_completions: 5715 00:24:53.336 submitted_requests: 8642 00:24:53.336 queued_requests: 1 00:24:53.336 00:24:53.336 ==================== 00:24:53.336 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:53.336 TCP transport: 00:24:53.336 polls: 16292 00:24:53.336 idle_polls: 12790 00:24:53.336 sock_completions: 3502 00:24:53.336 nvme_completions: 4537 00:24:53.336 submitted_requests: 6796 00:24:53.336 queued_requests: 1 00:24:53.336 ======================================================== 00:24:53.336 Latency(us) 00:24:53.336 Device Information : IOPS MiB/s Average min max 00:24:53.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1428.41 357.10 92349.20 49851.32 144808.96 00:24:53.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1133.93 283.48 115079.20 40467.67 181949.63 00:24:53.336 ======================================================== 00:24:53.336 Total : 2562.34 640.58 102408.06 40467.67 181949.63 00:24:53.336 00:24:53.336 12:54:45 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:53.336 12:54:45 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:53.595 12:54:45 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:53.595 rmmod nvme_tcp 00:24:53.595 rmmod nvme_fabrics 00:24:53.595 rmmod nvme_keyring 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 4034717 ']' 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 4034717 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 4034717 ']' 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 4034717 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4034717 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4034717' 00:24:53.595 killing process with pid 4034717 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 4034717 00:24:53.595 12:54:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 4034717 00:24:55.499 12:54:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:55.499 12:54:47 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:55.499 12:54:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:55.499 12:54:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:55.499 12:54:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:55.499 12:54:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.499 12:54:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.499 12:54:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.400 12:54:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:57.400 00:24:57.400 real 0m25.147s 00:24:57.400 user 1m8.385s 00:24:57.400 sys 0m7.604s 00:24:57.400 12:54:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:57.400 12:54:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:57.400 ************************************ 00:24:57.400 END TEST nvmf_perf 00:24:57.400 ************************************ 00:24:57.400 12:54:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:57.400 12:54:49 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:57.400 12:54:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:57.400 12:54:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:57.400 12:54:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:57.400 ************************************ 00:24:57.400 START TEST nvmf_fio_host 00:24:57.400 ************************************ 00:24:57.400 12:54:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:57.400 * Looking for test 
storage... 00:24:57.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:57.400 12:54:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.400 12:54:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.400 12:54:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.400 12:54:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:57.401 
12:54:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:57.401 12:54:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:03.970 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:03.970 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:03.971 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:03.971 Found net devices under 0000:af:00.0: cvl_0_0 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:03.971 Found net devices under 0000:af:00.1: cvl_0_1 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:03.971 
12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:03.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:03.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:25:03.971 00:25:03.971 --- 10.0.0.2 ping statistics --- 00:25:03.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.971 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:03.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:03.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:25:03.971 00:25:03.971 --- 10.0.0.1 ping statistics --- 00:25:03.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.971 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.971 12:54:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:03.971 12:54:54 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:03.971 12:54:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:03.971 12:54:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:03.971 12:54:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:03.971 12:54:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.971 12:54:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=4041204 00:25:03.971 12:54:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:03.971 12:54:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:03.971 12:54:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 4041204 00:25:03.971 12:54:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 4041204 ']' 00:25:03.971 12:54:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.971 12:54:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:03.971 12:54:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.971 12:54:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:03.971 12:54:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.971 [2024-07-15 12:54:55.115512] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:25:03.971 [2024-07-15 12:54:55.115626] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.971 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.971 [2024-07-15 12:54:55.243427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:03.971 [2024-07-15 12:54:55.335723] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.971 [2024-07-15 12:54:55.335768] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.971 [2024-07-15 12:54:55.335778] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.971 [2024-07-15 12:54:55.335787] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.971 [2024-07-15 12:54:55.335794] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:03.971 [2024-07-15 12:54:55.335846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.971 [2024-07-15 12:54:55.335957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.971 [2024-07-15 12:54:55.336069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.971 [2024-07-15 12:54:55.336069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:04.230 12:54:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:04.230 12:54:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:25:04.230 12:54:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:04.489 [2024-07-15 12:54:56.178782] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.489 12:54:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:04.489 12:54:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:04.489 12:54:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.489 12:54:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:04.747 Malloc1 00:25:04.747 12:54:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:05.007 12:54:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:05.267 12:54:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:05.267 
[2024-07-15 12:54:57.190478] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.526 12:54:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:05.813 12:54:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:06.081 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:06.081 fio-3.35 00:25:06.081 Starting 1 thread 00:25:06.081 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.642 00:25:08.642 test: (groupid=0, jobs=1): err= 0: pid=4041953: Mon Jul 15 12:55:00 2024 00:25:08.642 read: IOPS=3730, BW=14.6MiB/s (15.3MB/s)(29.4MiB/2016msec) 00:25:08.642 slat (nsec): 
min=1345, max=209267, avg=1721.94, stdev=3489.95 00:25:08.642 clat (usec): min=5035, max=33923, avg=18445.93, stdev=1867.18 00:25:08.642 lat (usec): min=5066, max=33924, avg=18447.65, stdev=1866.80 00:25:08.642 clat percentiles (usec): 00:25:08.642 | 1.00th=[14484], 5.00th=[15795], 10.00th=[16450], 20.00th=[16909], 00:25:08.642 | 30.00th=[17433], 40.00th=[17957], 50.00th=[18482], 60.00th=[18744], 00:25:08.642 | 70.00th=[19268], 80.00th=[19792], 90.00th=[20579], 95.00th=[21103], 00:25:08.642 | 99.00th=[22414], 99.50th=[22938], 99.90th=[32637], 99.95th=[33817], 00:25:08.642 | 99.99th=[33817] 00:25:08.642 bw ( KiB/s): min=14328, max=15504, per=100.00%, avg=14930.00, stdev=485.72, samples=4 00:25:08.642 iops : min= 3582, max= 3876, avg=3732.50, stdev=121.43, samples=4 00:25:08.642 write: IOPS=3759, BW=14.7MiB/s (15.4MB/s)(29.6MiB/2016msec); 0 zone resets 00:25:08.642 slat (nsec): min=1427, max=181309, avg=1817.39, stdev=2528.10 00:25:08.642 clat (usec): min=2021, max=30599, avg=15525.47, stdev=1580.46 00:25:08.642 lat (usec): min=2033, max=30600, avg=15527.28, stdev=1580.07 00:25:08.642 clat percentiles (usec): 00:25:08.642 | 1.00th=[12256], 5.00th=[13435], 10.00th=[14091], 20.00th=[14484], 00:25:08.642 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15533], 60.00th=[15926], 00:25:08.642 | 70.00th=[16188], 80.00th=[16450], 90.00th=[16909], 95.00th=[17433], 00:25:08.642 | 99.00th=[18482], 99.50th=[23200], 99.90th=[27657], 99.95th=[30540], 00:25:08.642 | 99.99th=[30540] 00:25:08.642 bw ( KiB/s): min=14720, max=15232, per=99.84%, avg=15016.00, stdev=214.17, samples=4 00:25:08.642 iops : min= 3680, max= 3808, avg=3754.00, stdev=53.54, samples=4 00:25:08.642 lat (msec) : 4=0.07%, 10=0.28%, 20=90.25%, 50=9.40% 00:25:08.642 cpu : usr=68.39%, sys=31.07%, ctx=57, majf=0, minf=6 00:25:08.642 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:25:08.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.642 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:08.642 issued rwts: total=7520,7580,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.642 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:08.642 00:25:08.642 Run status group 0 (all jobs): 00:25:08.642 READ: bw=14.6MiB/s (15.3MB/s), 14.6MiB/s-14.6MiB/s (15.3MB/s-15.3MB/s), io=29.4MiB (30.8MB), run=2016-2016msec 00:25:08.642 WRITE: bw=14.7MiB/s (15.4MB/s), 14.7MiB/s-14.7MiB/s (15.4MB/s-15.4MB/s), io=29.6MiB (31.0MB), run=2016-2016msec 00:25:08.642 12:55:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:08.642 12:55:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:08.642 12:55:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:08.642 12:55:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:08.642 12:55:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:08.642 12:55:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:08.642 12:55:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:08.642 12:55:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:08.642 12:55:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:08.642 12:55:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:08.642 12:55:00 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:08.642 12:55:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:08.642 12:55:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:08.642 12:55:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:08.642 12:55:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:08.642 12:55:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:08.642 12:55:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:08.642 12:55:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:08.642 12:55:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:08.642 12:55:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:08.642 12:55:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:08.642 12:55:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:08.901 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:08.901 fio-3.35 00:25:08.901 Starting 1 thread 00:25:08.901 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.433 00:25:11.433 test: (groupid=0, jobs=1): err= 0: pid=4042533: Mon Jul 15 12:55:03 2024 00:25:11.433 read: IOPS=4589, BW=71.7MiB/s (75.2MB/s)(144MiB/2007msec) 00:25:11.433 slat (usec): min=3, max=125, avg= 4.20, stdev= 
1.57 00:25:11.433 clat (usec): min=4207, max=36415, avg=15945.15, stdev=5373.64 00:25:11.433 lat (usec): min=4211, max=36419, avg=15949.35, stdev=5373.66 00:25:11.433 clat percentiles (usec): 00:25:11.433 | 1.00th=[ 5669], 5.00th=[ 7177], 10.00th=[ 8356], 20.00th=[10421], 00:25:11.433 | 30.00th=[12780], 40.00th=[15139], 50.00th=[16712], 60.00th=[17957], 00:25:11.433 | 70.00th=[19006], 80.00th=[20055], 90.00th=[22152], 95.00th=[24511], 00:25:11.433 | 99.00th=[29492], 99.50th=[30016], 99.90th=[32637], 99.95th=[33817], 00:25:11.433 | 99.99th=[36439] 00:25:11.433 bw ( KiB/s): min=28288, max=56032, per=52.17%, avg=38311.00, stdev=12189.30, samples=4 00:25:11.433 iops : min= 1768, max= 3502, avg=2394.25, stdev=761.93, samples=4 00:25:11.433 write: IOPS=2760, BW=43.1MiB/s (45.2MB/s)(78.6MiB/1822msec); 0 zone resets 00:25:11.433 slat (usec): min=45, max=381, avg=47.22, stdev= 7.90 00:25:11.433 clat (usec): min=4940, max=42205, avg=20777.19, stdev=7329.85 00:25:11.433 lat (usec): min=4986, max=42251, avg=20824.41, stdev=7329.62 00:25:11.433 clat percentiles (usec): 00:25:11.433 | 1.00th=[ 8717], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[11994], 00:25:11.433 | 30.00th=[13960], 40.00th=[20055], 50.00th=[23462], 60.00th=[24511], 00:25:11.433 | 70.00th=[26084], 80.00th=[27395], 90.00th=[28967], 95.00th=[30540], 00:25:11.433 | 99.00th=[33424], 99.50th=[34866], 99.90th=[41681], 99.95th=[41681], 00:25:11.433 | 99.99th=[42206] 00:25:11.433 bw ( KiB/s): min=28992, max=57824, per=90.08%, avg=39782.50, stdev=12545.93, samples=4 00:25:11.433 iops : min= 1812, max= 3614, avg=2486.25, stdev=784.20, samples=4 00:25:11.433 lat (msec) : 10=13.92%, 20=51.51%, 50=34.57% 00:25:11.433 cpu : usr=82.30%, sys=16.75%, ctx=34, majf=0, minf=3 00:25:11.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:11.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:11.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:25:11.433 issued rwts: total=9212,5029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:11.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:11.433 00:25:11.433 Run status group 0 (all jobs): 00:25:11.433 READ: bw=71.7MiB/s (75.2MB/s), 71.7MiB/s-71.7MiB/s (75.2MB/s-75.2MB/s), io=144MiB (151MB), run=2007-2007msec 00:25:11.433 WRITE: bw=43.1MiB/s (45.2MB/s), 43.1MiB/s-43.1MiB/s (45.2MB/s-45.2MB/s), io=78.6MiB (82.4MB), run=1822-1822msec 00:25:11.433 12:55:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:11.692 rmmod nvme_tcp 00:25:11.692 rmmod nvme_fabrics 00:25:11.692 rmmod nvme_keyring 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 4041204 ']' 00:25:11.692 12:55:03 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 4041204 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 4041204 ']' 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 4041204 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4041204 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4041204' 00:25:11.692 killing process with pid 4041204 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 4041204 00:25:11.692 12:55:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 4041204 00:25:11.950 12:55:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:11.950 12:55:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:11.950 12:55:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:11.950 12:55:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:11.950 12:55:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:11.950 12:55:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.950 12:55:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:11.950 12:55:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.854 12:55:05 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:14.115 00:25:14.115 real 0m16.622s 00:25:14.115 user 1m2.427s 00:25:14.115 sys 0m6.492s 00:25:14.115 12:55:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:14.115 12:55:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.115 ************************************ 00:25:14.115 END TEST nvmf_fio_host 00:25:14.115 ************************************ 00:25:14.115 12:55:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:14.115 12:55:05 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:14.115 12:55:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:14.115 12:55:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:14.115 12:55:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:14.115 ************************************ 00:25:14.115 START TEST nvmf_failover 00:25:14.115 ************************************ 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:14.115 * Looking for test storage... 
00:25:14.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.115 12:55:05 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:14.115 12:55:05 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:14.115 12:55:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.115 12:55:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:14.115 12:55:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:14.115 12:55:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:25:14.115 12:55:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:20.792 12:55:11 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:20.792 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:20.792 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:20.792 Found net devices under 0000:af:00.0: cvl_0_0 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:20.792 Found net devices under 0000:af:00.1: cvl_0_1 00:25:20.792 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:20.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:20.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:25:20.793 00:25:20.793 --- 10.0.0.2 ping statistics --- 00:25:20.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.793 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:20.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:25:20.793 00:25:20.793 --- 10.0.0.1 ping statistics --- 00:25:20.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.793 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=4046626 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 4046626 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 4046626 ']' 
00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:20.793 12:55:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:20.793 [2024-07-15 12:55:11.834354] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:25:20.793 [2024-07-15 12:55:11.834409] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.793 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.793 [2024-07-15 12:55:11.919547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:20.793 [2024-07-15 12:55:12.021235] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.793 [2024-07-15 12:55:12.021285] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.793 [2024-07-15 12:55:12.021298] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.793 [2024-07-15 12:55:12.021309] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.793 [2024-07-15 12:55:12.021319] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:20.793 [2024-07-15 12:55:12.021440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.793 [2024-07-15 12:55:12.021555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:20.793 [2024-07-15 12:55:12.021557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.051 12:55:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:21.051 12:55:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:21.051 12:55:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:21.051 12:55:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:21.051 12:55:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:21.051 12:55:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.052 12:55:12 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:21.310 [2024-07-15 12:55:13.048041] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.310 12:55:13 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:21.568 Malloc0 00:25:21.568 12:55:13 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:21.826 12:55:13 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:22.084 12:55:13 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:22.084 [2024-07-15 12:55:13.954614] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.084 12:55:13 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:22.342 [2024-07-15 12:55:14.123244] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:22.342 12:55:14 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:22.601 [2024-07-15 12:55:14.376336] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:22.601 12:55:14 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=4046995 00:25:22.601 12:55:14 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:22.601 12:55:14 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:22.601 12:55:14 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 4046995 /var/tmp/bdevperf.sock 00:25:22.601 12:55:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 4046995 ']' 00:25:22.601 12:55:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:22.601 12:55:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:22.601 12:55:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:22.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:22.601 12:55:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:22.601 12:55:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:22.859 12:55:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:22.859 12:55:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:22.859 12:55:14 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:23.427 NVMe0n1 00:25:23.427 12:55:15 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:23.686 00:25:23.686 12:55:15 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=4047250 00:25:23.686 12:55:15 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:23.686 12:55:15 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:24.622 12:55:16 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:24.881 [2024-07-15 12:55:16.601831] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12482a0 is same with the state(5) to be set 00:25:24.881 12:55:16 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:28.167 12:55:19 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:28.167 00:25:28.167 12:55:20 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:28.425 [2024-07-15 12:55:20.254952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1249180 is same with the state(5) to be set 00:25:28.425 12:55:20 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:31.713 12:55:23 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.713 [2024-07-15 12:55:23.520238] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.713 12:55:23 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:32.648 12:55:24 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:32.907 12:55:24 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 4047250 00:25:39.497 0 00:25:39.497 12:55:30 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 4046995 00:25:39.497 12:55:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 4046995 ']' 00:25:39.497 12:55:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 4046995 00:25:39.497 12:55:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:39.497 12:55:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:39.497 12:55:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4046995 00:25:39.497 12:55:30 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:39.497 12:55:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:39.497 12:55:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4046995' 00:25:39.497 killing process with pid 4046995 00:25:39.497 12:55:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 4046995 00:25:39.497 12:55:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 4046995 00:25:39.497 12:55:30 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:39.497 [2024-07-15 12:55:14.452267] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:25:39.497 [2024-07-15 12:55:14.452330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4046995 ] 00:25:39.497 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.497 [2024-07-15 12:55:14.533166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.497 [2024-07-15 12:55:14.625337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.497 Running I/O for 15 seconds... 
00:25:39.497 [2024-07-15 12:55:16.604649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:32312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.604689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.604709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:32320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.604720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.604733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.604743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.604755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.604765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.604777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.604787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.604799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:32352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.604809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.604821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.604830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.604842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.604852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.604864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.604873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.604885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:32384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.604895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.604907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.604916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.604936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 
lba:32400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.604946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.604958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:32408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.604968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.604980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.604989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:32424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:32432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:32440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 
[2024-07-15 12:55:16.605068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:32464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:32472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:32480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:32488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:32504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:32528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:32536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:32544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:32552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:32560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:32568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 
[2024-07-15 12:55:16.605443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:32584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:32592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:32624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:32632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:32664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 
lba:32672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:32808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.497 [2024-07-15 12:55:16.605736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:32816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.497 [2024-07-15 12:55:16.605759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:32688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 
[2024-07-15 12:55:16.605813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:32704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:32728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:32736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.497 [2024-07-15 12:55:16.605921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.497 [2024-07-15 12:55:16.605930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.498 [2024-07-15 12:55:16.605942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.498 [2024-07-15 12:55:16.605951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.498 [2024-07-15 12:55:16.605963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:32760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.498 [2024-07-15 12:55:16.605973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.498 [2024-07-15 12:55:16.605984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.498 [2024-07-15 12:55:16.605993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.498 [2024-07-15 12:55:16.606005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.498 [2024-07-15 12:55:16.606015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.498 [2024-07-15 12:55:16.606028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:32784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.498 [2024-07-15 12:55:16.606038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.498 [2024-07-15 12:55:16.606049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 
lba:32792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.498 [2024-07-15 12:55:16.606059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated READ/WRITE commands on qid:1 (lba:32800 through lba:33200, len:8), each completed with ABORTED - SQ DELETION (00/08); identical nvme_io_qpair_print_command/spdk_nvme_print_completion pairs omitted ...]
[... queued i/o aborted and completed manually: WRITE sqid:1 cid:0 lba:33208 through lba:33328 (len:8, PRP1 0x0 PRP2 0x0), each with ABORTED - SQ DELETION (00/08); repeated nvme_qpair_abort_queued_reqs/nvme_qpair_manual_complete_request entries omitted ...]
00:25:39.499 [2024-07-15 12:55:16.617331] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1156150 was disconnected and freed. reset controller.
00:25:39.499 [2024-07-15 12:55:16.617347] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... four ASYNC EVENT REQUEST admin commands (qid:0, cid:0-3) aborted with SQ DELETION (00/08); entries omitted ...]
00:25:39.499 [2024-07-15 12:55:16.617494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:39.499 [2024-07-15 12:55:16.617536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1161a30 (9): Bad file descriptor
00:25:39.499 [2024-07-15 12:55:16.623380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:39.499 [2024-07-15 12:55:16.786501] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:39.499 [2024-07-15 12:55:20.255141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:39.499 [2024-07-15 12:55:20.255180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated WRITE (lba:111640 through lba:111776) and READ (lba:111000 through lba:111104) commands on qid:1, each completed with ABORTED - SQ DELETION (00/08); identical log pairs omitted ...]
00:25:39.499 [2024-07-15 12:55:20.255937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:111112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:39.499 [2024-07-15 12:55:20.255946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:39.499 [2024-07-15 12:55:20.255958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.499 [2024-07-15 12:55:20.255968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.499 [2024-07-15 12:55:20.255980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.499 [2024-07-15 12:55:20.255989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.499 [2024-07-15 12:55:20.256001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.499 [2024-07-15 12:55:20.256011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.499 [2024-07-15 12:55:20.256022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.499 [2024-07-15 12:55:20.256032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.499 [2024-07-15 12:55:20.256045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.499 [2024-07-15 12:55:20.256055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.499 [2024-07-15 12:55:20.256066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:111816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.499 [2024-07-15 12:55:20.256076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.499 [2024-07-15 12:55:20.256088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.499 [2024-07-15 12:55:20.256098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:111840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:111856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:111864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:111880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 
[2024-07-15 12:55:20.256341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:111912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:111928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:111936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:111944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:111968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 
lba:112000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.500 [2024-07-15 12:55:20.256590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.256612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:111136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.256634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:111144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.256658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:111152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.256680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.256701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 
[2024-07-15 12:55:20.256713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.256725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:111176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.256749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:111184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.256770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:111192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.256791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.256813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:111208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.256835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.256858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.256879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.256900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.256922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.256943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 
nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.256965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.256976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.256986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:111288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:39.500 [2024-07-15 12:55:20.257087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:111304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:111312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:111320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:111336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:111344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:111352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:111360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:111368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:111376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:111384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 
nsid:1 lba:111392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:111400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:111432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:39.500 [2024-07-15 12:55:20.257467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:111448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:111456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:111464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.500 [2024-07-15 12:55:20.257579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.500 [2024-07-15 12:55:20.257590] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:39.500 [2024-07-15 12:55:20.257602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:111488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:39.500 [2024-07-15 12:55:20.257611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:39.501 [2024-07-15 12:55:20.257994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:112008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:39.501 [2024-07-15 12:55:20.258004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:39.501 [2024-07-15 12:55:20.258015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118e6f0 is same with the state(5) to be set
00:25:39.501 [2024-07-15 12:55:20.258026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:39.501 [2024-07-15 12:55:20.258035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:39.501 [2024-07-15 12:55:20.258044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112016 len:8 PRP1 0x0 PRP2 0x0
00:25:39.501 [2024-07-15 12:55:20.258054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:39.501 [2024-07-15 12:55:20.258100] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x118e6f0 was disconnected and freed. reset controller.
00:25:39.501 [2024-07-15 12:55:20.258113] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:25:39.501 [2024-07-15 12:55:20.258139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:39.501 [2024-07-15 12:55:20.258150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:39.501 [2024-07-15 12:55:20.258163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:39.501 [2024-07-15 12:55:20.258173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:39.501 [2024-07-15 12:55:20.258184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:39.501 [2024-07-15 12:55:20.258193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:39.501 [2024-07-15 12:55:20.258203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:39.501 [2024-07-15 12:55:20.258213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:39.501 [2024-07-15 12:55:20.258222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:39.501 [2024-07-15 12:55:20.262477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:39.501 [2024-07-15 12:55:20.262510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1161a30 (9): Bad file descriptor
00:25:39.501 [2024-07-15 12:55:20.333401] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:39.501 [2024-07-15 12:55:24.785549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:39.501 [2024-07-15 12:55:24.785593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:39.503 [2024-07-15 12:55:24.787539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:39.503 [2024-07-15 12:55:24.787549] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.787560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.787570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.787582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.787592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.787603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.787612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.787624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.787634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.787646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.787655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.787666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 
lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.787676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.787687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.787702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.787714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.787724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.787736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.787746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.787758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.787767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.787779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.787788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 
[2024-07-15 12:55:24.787800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.787810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.787822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.787831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.787843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.787853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.787865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.787874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.787886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.787895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.787907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.787917] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.787929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:86344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.787938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.787949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.787959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.787973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.787983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.787994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.788004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.788016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.503 [2024-07-15 12:55:24.788026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.788038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 
lba:86376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.788047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.788059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.788069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.788080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.503 [2024-07-15 12:55:24.788090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.503 [2024-07-15 12:55:24.788101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.504 [2024-07-15 12:55:24.788110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.504 [2024-07-15 12:55:24.788122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.504 [2024-07-15 12:55:24.788131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.504 [2024-07-15 12:55:24.788143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.504 [2024-07-15 12:55:24.788152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.504 
[2024-07-15 12:55:24.788164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.504 [2024-07-15 12:55:24.788174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.504 [2024-07-15 12:55:24.788185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.504 [2024-07-15 12:55:24.788194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.504 [2024-07-15 12:55:24.788206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.504 [2024-07-15 12:55:24.788215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.504 [2024-07-15 12:55:24.788228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.504 [2024-07-15 12:55:24.788239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.504 [2024-07-15 12:55:24.788251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.504 [2024-07-15 12:55:24.788266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.504 [2024-07-15 12:55:24.788278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.504 [2024-07-15 12:55:24.788288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.504 [2024-07-15 12:55:24.788301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.504 [2024-07-15 12:55:24.788310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.504 [2024-07-15 12:55:24.788322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.504 [2024-07-15 12:55:24.788331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.504 [2024-07-15 12:55:24.788343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.504 [2024-07-15 12:55:24.788353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.504 [2024-07-15 12:55:24.788376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.504 [2024-07-15 12:55:24.788384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.504 [2024-07-15 12:55:24.788393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86496 len:8 PRP1 0x0 PRP2 0x0 00:25:39.504 [2024-07-15 12:55:24.788403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.504 [2024-07-15 12:55:24.788450] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1191ee0 was disconnected and freed. reset controller. 
00:25:39.504 [2024-07-15 12:55:24.788462] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:39.504 [2024-07-15 12:55:24.788486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.504 [2024-07-15 12:55:24.788497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.504 [2024-07-15 12:55:24.788508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.504 [2024-07-15 12:55:24.788517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.504 [2024-07-15 12:55:24.788528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.504 [2024-07-15 12:55:24.788538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.504 [2024-07-15 12:55:24.788549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.504 [2024-07-15 12:55:24.788558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.504 [2024-07-15 12:55:24.788568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:39.504 [2024-07-15 12:55:24.792824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:39.504 [2024-07-15 12:55:24.792864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1161a30 (9): Bad file descriptor 00:25:39.504 [2024-07-15 12:55:24.953292] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:39.504 00:25:39.504 Latency(us) 00:25:39.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.504 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:39.504 Verification LBA range: start 0x0 length 0x4000 00:25:39.504 NVMe0n1 : 15.03 4928.06 19.25 811.56 0.00 22264.44 629.29 38130.04 00:25:39.504 =================================================================================================================== 00:25:39.504 Total : 4928.06 19.25 811.56 0.00 22264.44 629.29 38130.04 00:25:39.504 Received shutdown signal, test time was about 15.000000 seconds 00:25:39.504 00:25:39.504 Latency(us) 00:25:39.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.504 =================================================================================================================== 00:25:39.504 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:39.504 12:55:30 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:39.504 12:55:30 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:39.504 12:55:30 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:39.504 12:55:30 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=4049881 00:25:39.504 12:55:30 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:39.504 12:55:30 
nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 4049881 /var/tmp/bdevperf.sock 00:25:39.504 12:55:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 4049881 ']' 00:25:39.504 12:55:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:39.504 12:55:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:39.504 12:55:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:39.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:39.504 12:55:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:39.504 12:55:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:39.504 12:55:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:39.504 12:55:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:39.504 12:55:31 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:39.504 [2024-07-15 12:55:31.338613] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:39.504 12:55:31 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:39.762 [2024-07-15 12:55:31.599619] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:39.762 12:55:31 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:40.327 NVMe0n1 00:25:40.327 12:55:32 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:40.893 00:25:40.893 12:55:32 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:41.152 00:25:41.152 12:55:32 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:41.152 12:55:32 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:41.414 12:55:33 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:41.673 12:55:33 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:44.962 12:55:36 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:44.962 12:55:36 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:44.962 12:55:36 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=4050923 00:25:44.962 12:55:36 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:44.962 12:55:36 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 4050923 00:25:46.339 0 00:25:46.339 12:55:37 nvmf_tcp.nvmf_failover -- host/failover.sh@94 
-- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:46.339 [2024-07-15 12:55:30.914841] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:25:46.339 [2024-07-15 12:55:30.914907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4049881 ] 00:25:46.339 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.339 [2024-07-15 12:55:30.998533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.339 [2024-07-15 12:55:31.081914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.339 [2024-07-15 12:55:33.423192] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:46.339 [2024-07-15 12:55:33.423246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.339 [2024-07-15 12:55:33.423267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.339 [2024-07-15 12:55:33.423280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.339 [2024-07-15 12:55:33.423291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.339 [2024-07-15 12:55:33.423301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.339 [2024-07-15 12:55:33.423311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.339 [2024-07-15 
12:55:33.423322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.339 [2024-07-15 12:55:33.423332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.339 [2024-07-15 12:55:33.423341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.339 [2024-07-15 12:55:33.423374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.339 [2024-07-15 12:55:33.423392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x556a30 (9): Bad file descriptor 00:25:46.339 [2024-07-15 12:55:33.484176] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:46.339 Running I/O for 1 seconds... 00:25:46.339 00:25:46.339 Latency(us) 00:25:46.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.339 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:46.339 Verification LBA range: start 0x0 length 0x4000 00:25:46.339 NVMe0n1 : 1.02 3751.35 14.65 0.00 0.00 33968.07 5034.36 29193.31 00:25:46.339 =================================================================================================================== 00:25:46.339 Total : 3751.35 14.65 0.00 0.00 33968.07 5034.36 29193.31 00:25:46.339 12:55:37 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:46.339 12:55:37 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:46.339 12:55:38 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 00:25:46.596 12:55:38 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:46.596 12:55:38 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:46.855 12:55:38 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:47.113 12:55:38 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:50.401 12:55:41 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:50.401 12:55:41 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:50.401 12:55:42 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 4049881 00:25:50.401 12:55:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 4049881 ']' 00:25:50.401 12:55:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 4049881 00:25:50.401 12:55:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:50.401 12:55:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:50.401 12:55:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4049881 00:25:50.401 12:55:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:50.401 12:55:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:50.401 12:55:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4049881' 00:25:50.401 killing process with pid 4049881 00:25:50.401 12:55:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # 
kill 4049881 00:25:50.401 12:55:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 4049881 00:25:50.660 12:55:42 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:50.660 12:55:42 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:50.918 rmmod nvme_tcp 00:25:50.918 rmmod nvme_fabrics 00:25:50.918 rmmod nvme_keyring 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 4046626 ']' 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 4046626 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 4046626 ']' 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 4046626 00:25:50.918 12:55:42 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4046626 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4046626' 00:25:50.918 killing process with pid 4046626 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 4046626 00:25:50.918 12:55:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 4046626 00:25:51.177 12:55:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:51.177 12:55:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:51.177 12:55:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:51.177 12:55:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:51.177 12:55:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:51.177 12:55:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.177 12:55:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:51.177 12:55:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.715 12:55:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:53.715 00:25:53.715 real 0m39.264s 00:25:53.715 user 2m6.898s 00:25:53.715 sys 0m7.673s 00:25:53.715 12:55:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:53.715 12:55:45 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:53.715 ************************************ 00:25:53.715 END TEST nvmf_failover 00:25:53.715 ************************************ 00:25:53.715 12:55:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:53.715 12:55:45 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:53.715 12:55:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:53.715 12:55:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:53.715 12:55:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:53.715 ************************************ 00:25:53.715 START TEST nvmf_host_discovery 00:25:53.715 ************************************ 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:53.715 * Looking for test storage... 
00:25:53.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:53.715 12:55:45 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:53.715 12:55:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:53.716 12:55:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:53.716 12:55:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:53.716 12:55:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:53.716 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:53.716 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.716 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:53.716 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:53.716 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:53.716 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.716 12:55:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:53.716 12:55:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.716 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:53.716 12:55:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:53.716 12:55:45 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@285 -- # xtrace_disable 00:25:53.716 12:55:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:58.984 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:58.984 12:55:50 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:58.984 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:58.984 
12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:58.984 Found net devices under 0000:af:00.0: cvl_0_0 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:58.984 Found net devices under 0000:af:00.1: cvl_0_1 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:58.984 12:55:50 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:58.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:58.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:25:58.984 00:25:58.984 --- 10.0.0.2 ping statistics --- 00:25:58.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.984 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:58.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:58.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:25:58.984 00:25:58.984 --- 10.0.0.1 ping statistics --- 00:25:58.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.984 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # 
nvmfappstart -m 0x2 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=4055643 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 4055643 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 4055643 ']' 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:58.984 12:55:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.242 [2024-07-15 12:55:50.955985] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:25:59.242 [2024-07-15 12:55:50.956046] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.242 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.242 [2024-07-15 12:55:51.041202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.242 [2024-07-15 12:55:51.144418] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.242 [2024-07-15 12:55:51.144464] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.242 [2024-07-15 12:55:51.144477] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.242 [2024-07-15 12:55:51.144488] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.242 [2024-07-15 12:55:51.144498] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:59.242 [2024-07-15 12:55:51.144523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.620 [2024-07-15 12:55:52.201132] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.620 [2024-07-15 12:55:52.213320] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- 
# rpc_cmd bdev_null_create null0 1000 512 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.620 null0 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.620 null1 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=4055831 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 4055831 /tmp/host.sock 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 4055831 ']' 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:00.620 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:00.620 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.620 [2024-07-15 12:55:52.333221] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:26:00.620 [2024-07-15 12:55:52.333348] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4055831 ] 00:26:00.620 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.620 [2024-07-15 12:55:52.452644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.620 [2024-07-15 12:55:52.555789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.879 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:00.879 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:00.879 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:00.879 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:00.879 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.879 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.879 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.879 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:00.879 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.879 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.879 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.879 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:00.879 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:00.879 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:00.879 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:00.879 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.879 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.879 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:00.879 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:00.879 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.137 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:01.137 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:01.137 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.137 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.137 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.137 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:01.137 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:26:01.137 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:01.137 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.137 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:01.137 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.138 12:55:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:01.138 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:01.138 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:01.138 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.138 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.138 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:01.138 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:01.138 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.138 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:01.138 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:01.138 12:55:53 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.138 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.138 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.138 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.138 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:01.138 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:01.138 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.399 [2024-07-15 12:55:53.091703] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:01.399 12:55:53 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:01.399 12:55:53 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:01.399 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:01.400 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:01.400 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:01.400 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.400 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.400 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:01.400 12:55:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:01.400 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.400 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:26:01.400 12:55:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:02.015 [2024-07-15 12:55:53.820358] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:02.015 [2024-07-15 12:55:53.820381] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:02.015 [2024-07-15 12:55:53.820398] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:02.015 [2024-07-15 12:55:53.906688] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:02.282 [2024-07-15 12:55:54.010377] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 
00:26:02.282 [2024-07-15 12:55:54.010401] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:02.541 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers -n nvme0 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.542 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- 
# get_bdev_list 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:02.801 12:55:54 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.801 [2024-07-15 12:55:54.608251] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:02.801 [2024-07-15 12:55:54.609308] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:02.801 [2024-07-15 12:55:54.609336] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:02.801 12:55:54 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:02.801 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.802 [2024-07-15 12:55:54.696619] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- 
# eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:02.802 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.060 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:03.060 12:55:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:03.060 [2024-07-15 12:55:55.000956] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:03.060 [2024-07-15 12:55:55.000978] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:03.060 [2024-07-15 12:55:55.000986] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_notification_count 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.996 [2024-07-15 12:55:55.876797] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:03.996 [2024-07-15 12:55:55.876824] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 
'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:03.996 [2024-07-15 12:55:55.882435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.996 [2024-07-15 12:55:55.882459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.996 [2024-07-15 12:55:55.882471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.996 [2024-07-15 12:55:55.882481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.996 [2024-07-15 12:55:55.882492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.996 [2024-07-15 12:55:55.882502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.996 [2024-07-15 12:55:55.882512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.996 [2024-07-15 12:55:55.882522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.996 [2024-07-15 12:55:55.882532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f470 is same with the state(5) to be set 00:26:03.996 12:55:55 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:03.996 [2024-07-15 12:55:55.892447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97f470 (9): Bad file descriptor 00:26:03.996 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.996 [2024-07-15 12:55:55.902488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.996 [2024-07-15 12:55:55.902809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.996 [2024-07-15 12:55:55.902829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97f470 with addr=10.0.0.2, port=4420 00:26:03.996 [2024-07-15 12:55:55.902839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f470 is same with the state(5) to be set 00:26:03.996 [2024-07-15 12:55:55.902856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97f470 (9): Bad file descriptor 00:26:03.996 [2024-07-15 12:55:55.902870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.996 [2024-07-15 12:55:55.902880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller 
reinitialization failed 00:26:03.996 [2024-07-15 12:55:55.902891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:03.996 [2024-07-15 12:55:55.902916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.996 [2024-07-15 12:55:55.912552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.996 [2024-07-15 12:55:55.912848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.996 [2024-07-15 12:55:55.912865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97f470 with addr=10.0.0.2, port=4420 00:26:03.996 [2024-07-15 12:55:55.912876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f470 is same with the state(5) to be set 00:26:03.996 [2024-07-15 12:55:55.912891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97f470 (9): Bad file descriptor 00:26:03.996 [2024-07-15 12:55:55.912905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.996 [2024-07-15 12:55:55.912914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.996 [2024-07-15 12:55:55.912923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:03.996 [2024-07-15 12:55:55.912937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.996 [2024-07-15 12:55:55.922614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.996 [2024-07-15 12:55:55.922892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.996 [2024-07-15 12:55:55.922911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97f470 with addr=10.0.0.2, port=4420 00:26:03.996 [2024-07-15 12:55:55.922921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f470 is same with the state(5) to be set 00:26:03.996 [2024-07-15 12:55:55.922938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97f470 (9): Bad file descriptor 00:26:03.996 [2024-07-15 12:55:55.922952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.996 [2024-07-15 12:55:55.922961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.996 [2024-07-15 12:55:55.922971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:03.996 [2024-07-15 12:55:55.922985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.996 [2024-07-15 12:55:55.932677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.996 [2024-07-15 12:55:55.932848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.996 [2024-07-15 12:55:55.932865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97f470 with addr=10.0.0.2, port=4420 00:26:03.996 [2024-07-15 12:55:55.932876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f470 is same with the state(5) to be set 00:26:03.996 [2024-07-15 12:55:55.932890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97f470 (9): Bad file descriptor 00:26:03.996 [2024-07-15 12:55:55.932903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.996 [2024-07-15 12:55:55.932913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.996 [2024-07-15 12:55:55.932923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:03.997 [2024-07-15 12:55:55.932937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.255 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.255 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:04.255 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:04.255 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:04.255 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:04.255 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:04.255 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:04.255 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:04.255 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:04.255 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:04.255 [2024-07-15 12:55:55.942743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:04.255 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.255 [2024-07-15 12:55:55.942956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.255 [2024-07-15 12:55:55.942975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97f470 with addr=10.0.0.2, port=4420 00:26:04.255 [2024-07-15 12:55:55.942985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f470 is same with the state(5) to be set 00:26:04.256 [2024-07-15 12:55:55.943001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x97f470 (9): Bad file descriptor 00:26:04.256 [2024-07-15 12:55:55.943015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:04.256 [2024-07-15 12:55:55.943025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:04.256 [2024-07-15 12:55:55.943036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:04.256 [2024-07-15 12:55:55.943049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.256 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:04.256 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.256 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:04.256 [2024-07-15 12:55:55.952807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:04.256 [2024-07-15 12:55:55.953021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.256 [2024-07-15 12:55:55.953040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97f470 with addr=10.0.0.2, port=4420 00:26:04.256 [2024-07-15 12:55:55.953059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f470 is same with the state(5) to be set 00:26:04.256 [2024-07-15 12:55:55.953075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97f470 (9): Bad file descriptor 00:26:04.256 [2024-07-15 12:55:55.953088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:04.256 [2024-07-15 12:55:55.953097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:04.256 [2024-07-15 12:55:55.953107] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:04.256 [2024-07-15 12:55:55.953122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.256 [2024-07-15 12:55:55.962500] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:04.256 [2024-07-15 12:55:55.962521] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:04.256 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.256 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:04.256 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:04.256 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:04.256 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:04.256 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:04.256 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:04.256 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:04.256 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:04.256 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:04.256 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:04.256 
12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.256 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:04.256 12:55:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.256 12:55:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:04.256 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:04.257 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:04.257 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:04.257 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:04.257 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:04.257 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:04.257 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.257 12:55:56 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:26:04.257 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:04.257 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.516 12:55:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.453 [2024-07-15 12:55:57.319430] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:05.453 [2024-07-15 12:55:57.319450] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:05.453 [2024-07-15 12:55:57.319467] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:05.713 [2024-07-15 12:55:57.407762] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:05.713 [2024-07-15 12:55:57.514577] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:05.713 [2024-07-15 12:55:57.514611] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.713 request: 00:26:05.713 { 00:26:05.713 "name": "nvme", 00:26:05.713 "trtype": "tcp", 00:26:05.713 "traddr": "10.0.0.2", 00:26:05.713 "adrfam": "ipv4", 00:26:05.713 "trsvcid": "8009", 00:26:05.713 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:05.713 "wait_for_attach": true, 00:26:05.713 "method": "bdev_nvme_start_discovery", 00:26:05.713 "req_id": 1 00:26:05.713 } 00:26:05.713 Got JSON-RPC error 
response 00:26:05.713 response: 00:26:05.713 { 00:26:05.713 "code": -17, 00:26:05.713 "message": "File exists" 00:26:05.713 } 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:05.713 12:55:57 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.713 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.973 request: 00:26:05.973 { 00:26:05.973 "name": "nvme_second", 00:26:05.973 
"trtype": "tcp", 00:26:05.973 "traddr": "10.0.0.2", 00:26:05.973 "adrfam": "ipv4", 00:26:05.973 "trsvcid": "8009", 00:26:05.973 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:05.973 "wait_for_attach": true, 00:26:05.973 "method": "bdev_nvme_start_discovery", 00:26:05.973 "req_id": 1 00:26:05.973 } 00:26:05.973 Got JSON-RPC error response 00:26:05.973 response: 00:26:05.973 { 00:26:05.973 "code": -17, 00:26:05.973 "message": "File exists" 00:26:05.973 } 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:05.973 12:55:57 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 
-f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.973 12:55:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.911 [2024-07-15 12:55:58.778528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.911 [2024-07-15 12:55:58.778562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97bfc0 with addr=10.0.0.2, port=8010 00:26:06.911 [2024-07-15 12:55:58.778578] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:06.911 [2024-07-15 12:55:58.778587] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:06.911 [2024-07-15 12:55:58.778596] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:07.848 [2024-07-15 12:55:59.781003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.848 [2024-07-15 12:55:59.781034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97bfc0 with addr=10.0.0.2, port=8010 00:26:07.848 [2024-07-15 12:55:59.781049] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:07.848 [2024-07-15 12:55:59.781057] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:07.848 [2024-07-15 12:55:59.781066] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:09.226 [2024-07-15 12:56:00.783119] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:09.226 request: 00:26:09.226 { 00:26:09.226 "name": "nvme_second", 00:26:09.226 "trtype": "tcp", 00:26:09.226 "traddr": "10.0.0.2", 00:26:09.226 "adrfam": "ipv4", 00:26:09.226 "trsvcid": "8010", 00:26:09.226 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:09.226 "wait_for_attach": false, 
00:26:09.226 "attach_timeout_ms": 3000, 00:26:09.226 "method": "bdev_nvme_start_discovery", 00:26:09.226 "req_id": 1 00:26:09.226 } 00:26:09.226 Got JSON-RPC error response 00:26:09.226 response: 00:26:09.226 { 00:26:09.226 "code": -110, 00:26:09.226 "message": "Connection timed out" 00:26:09.226 } 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 4055831 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # 
nvmftestfini 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:09.226 rmmod nvme_tcp 00:26:09.226 rmmod nvme_fabrics 00:26:09.226 rmmod nvme_keyring 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 4055643 ']' 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 4055643 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 4055643 ']' 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 4055643 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4055643 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 4055643' 00:26:09.226 killing process with pid 4055643 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 4055643 00:26:09.226 12:56:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 4055643 00:26:09.485 12:56:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:09.485 12:56:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:09.485 12:56:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:09.485 12:56:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:09.485 12:56:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:09.485 12:56:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.485 12:56:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:09.485 12:56:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.421 12:56:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:11.421 00:26:11.421 real 0m18.036s 00:26:11.421 user 0m22.506s 00:26:11.421 sys 0m5.774s 00:26:11.421 12:56:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:11.421 12:56:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.421 ************************************ 00:26:11.421 END TEST nvmf_host_discovery 00:26:11.421 ************************************ 00:26:11.421 12:56:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:11.421 12:56:03 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:11.421 12:56:03 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:11.421 12:56:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:11.421 12:56:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:11.421 ************************************ 00:26:11.421 START TEST nvmf_host_multipath_status 00:26:11.421 ************************************ 00:26:11.421 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:11.680 * Looking for test storage... 00:26:11.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.680 12:56:03 
nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@47 -- # : 0 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:26:11.680 12:56:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:18.247 12:56:08 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.247 12:56:08 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:18.247 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:18.247 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.247 12:56:08 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:18.247 Found net devices under 0000:af:00.0: cvl_0_0 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.247 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:18.247 Found net devices under 0000:af:00.1: cvl_0_1 00:26:18.248 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.248 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:18.248 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:18.248 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:18.248 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:18.248 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:18.248 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.248 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.248 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.248 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:18.248 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:26:18.248 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:18.248 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:18.248 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:18.248 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.248 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:18.248 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:18.248 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:18.248 12:56:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:18.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:18.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:26:18.248 00:26:18.248 --- 10.0.0.2 ping statistics --- 00:26:18.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.248 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:18.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:18.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:26:18.248 00:26:18.248 --- 10.0.0.1 ping statistics --- 00:26:18.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.248 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=4061172 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 4061172 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 4061172 ']' 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:18.248 [2024-07-15 12:56:09.276785] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:26:18.248 [2024-07-15 12:56:09.276838] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.248 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.248 [2024-07-15 12:56:09.360652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:18.248 [2024-07-15 12:56:09.453082] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.248 [2024-07-15 12:56:09.453122] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:18.248 [2024-07-15 12:56:09.453132] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:18.248 [2024-07-15 12:56:09.453141] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:18.248 [2024-07-15 12:56:09.453149] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:18.248 [2024-07-15 12:56:09.453201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.248 [2024-07-15 12:56:09.453205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=4061172 00:26:18.248 12:56:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:18.248 [2024-07-15 12:56:10.001905] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.248 12:56:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:18.653 Malloc0 00:26:18.653 12:56:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:18.653 12:56:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:18.911 12:56:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:19.168 [2024-07-15 12:56:11.072286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:19.168 12:56:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:19.427 [2024-07-15 12:56:11.325027] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:19.427 12:56:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=4061467 00:26:19.427 12:56:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:19.427 12:56:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:19.427 12:56:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 4061467 /var/tmp/bdevperf.sock 00:26:19.427 12:56:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 4061467 ']' 00:26:19.427 12:56:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:19.427 12:56:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:19.427 12:56:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:26:19.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:19.427 12:56:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:19.427 12:56:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:20.359 12:56:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:20.359 12:56:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:20.359 12:56:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:20.618 12:56:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:21.184 Nvme0n1 00:26:21.184 12:56:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:21.441 Nvme0n1 00:26:21.441 12:56:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:21.441 12:56:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:24.014 12:56:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:24.014 12:56:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:24.014 12:56:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:24.014 12:56:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:24.950 12:56:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:24.950 12:56:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:24.950 12:56:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.950 12:56:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:25.208 12:56:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.208 12:56:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:25.208 12:56:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.208 12:56:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:25.466 12:56:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:25.466 12:56:17 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:25.466 12:56:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.466 12:56:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:25.725 12:56:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.725 12:56:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:25.725 12:56:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.725 12:56:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:26.293 12:56:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.293 12:56:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:26.293 12:56:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.293 12:56:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:26.293 12:56:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.293 12:56:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:26.293 12:56:18 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.293 12:56:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:26.552 12:56:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.552 12:56:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:26.552 12:56:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:26.811 12:56:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:27.069 12:56:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:28.445 12:56:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:28.445 12:56:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:28.445 12:56:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.445 12:56:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:28.445 12:56:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:26:28.445 12:56:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:28.445 12:56:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.445 12:56:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:28.703 12:56:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.703 12:56:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:28.703 12:56:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.703 12:56:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:28.961 12:56:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.961 12:56:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:28.961 12:56:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.961 12:56:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:29.220 12:56:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.220 12:56:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 
4420 accessible true 00:26:29.220 12:56:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.220 12:56:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:29.478 12:56:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.478 12:56:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:29.478 12:56:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.478 12:56:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:29.736 12:56:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.736 12:56:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:29.736 12:56:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:29.994 12:56:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:30.252 12:56:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:31.629 12:56:23 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:31.629 12:56:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:31.629 12:56:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.629 12:56:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:31.629 12:56:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.629 12:56:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:31.629 12:56:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.629 12:56:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:31.888 12:56:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:31.888 12:56:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:31.888 12:56:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.888 12:56:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:32.145 12:56:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.145 12:56:23 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:32.145 12:56:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.145 12:56:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:32.404 12:56:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.404 12:56:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:32.404 12:56:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.404 12:56:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:32.663 12:56:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.663 12:56:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:32.663 12:56:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.663 12:56:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:32.922 12:56:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.922 12:56:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 
00:26:32.922 12:56:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:33.181 12:56:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:33.440 12:56:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:34.379 12:56:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:34.379 12:56:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:34.379 12:56:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.379 12:56:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:34.637 12:56:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.637 12:56:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:34.637 12:56:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.637 12:56:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:34.895 12:56:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# [[ false == \f\a\l\s\e ]] 00:26:34.895 12:56:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:34.895 12:56:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.895 12:56:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:35.154 12:56:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.154 12:56:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:35.154 12:56:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.154 12:56:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:35.413 12:56:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.413 12:56:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:35.413 12:56:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.413 12:56:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:35.672 12:56:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.672 12:56:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 
-- # port_status 4421 accessible false 00:26:35.672 12:56:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.672 12:56:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:36.237 12:56:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:36.237 12:56:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:36.237 12:56:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:36.237 12:56:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:36.495 12:56:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:37.871 12:56:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:37.871 12:56:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:37.871 12:56:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.871 12:56:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:37.871 12:56:29 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:37.871 12:56:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:37.871 12:56:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.871 12:56:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:38.151 12:56:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:38.151 12:56:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:38.151 12:56:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.151 12:56:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:38.466 12:56:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.466 12:56:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:38.466 12:56:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.466 12:56:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:38.725 12:56:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.725 
12:56:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:38.725 12:56:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.725 12:56:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:38.983 12:56:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:38.983 12:56:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:38.983 12:56:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.983 12:56:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:39.242 12:56:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:39.242 12:56:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:39.242 12:56:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:39.501 12:56:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:39.760 12:56:31 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@113 -- # sleep 1 00:26:40.696 12:56:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:40.696 12:56:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:40.697 12:56:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.697 12:56:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:40.956 12:56:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:40.956 12:56:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:40.956 12:56:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.956 12:56:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:41.523 12:56:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.523 12:56:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:41.523 12:56:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.523 12:56:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:41.781 12:56:33 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.781 12:56:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:41.781 12:56:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.781 12:56:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:42.039 12:56:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.039 12:56:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:42.039 12:56:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.039 12:56:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:42.298 12:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:42.298 12:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:42.298 12:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.298 12:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:42.557 12:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.557 12:56:34 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:42.816 12:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:42.816 12:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:43.075 12:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:43.334 12:56:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:44.271 12:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:44.271 12:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:44.271 12:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.271 12:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:44.530 12:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:44.530 12:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:44.530 12:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.530 12:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:44.789 12:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:44.789 12:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:44.789 12:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.789 12:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:45.048 12:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.048 12:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:45.048 12:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.048 12:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:45.307 12:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.307 12:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:45.307 12:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:26:45.307 12:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:45.566 12:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.566 12:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:45.566 12:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:45.567 12:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.826 12:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.826 12:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:45.826 12:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:46.085 12:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:46.344 12:56:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:47.281 12:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:47.281 12:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:47.281 12:56:39 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.281 12:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:47.539 12:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:47.539 12:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:47.539 12:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.539 12:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:47.798 12:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.798 12:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:48.057 12:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.057 12:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:48.316 12:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.316 12:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:48.316 12:56:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.316 12:56:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:48.575 12:56:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.575 12:56:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:48.575 12:56:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.575 12:56:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:48.834 12:56:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.834 12:56:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:48.834 12:56:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.834 12:56:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:49.401 12:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:49.401 12:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:49.402 12:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:49.402 12:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:49.660 12:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:51.047 12:56:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:51.047 12:56:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:51.047 12:56:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.047 12:56:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:51.047 12:56:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.047 12:56:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:51.047 12:56:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.047 12:56:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:51.311 12:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.311 12:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:51.311 12:56:43 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.311 12:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:51.570 12:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.570 12:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:51.570 12:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:51.570 12:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.829 12:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.829 12:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:51.829 12:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.829 12:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:52.087 12:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.087 12:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:52.087 12:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.087 12:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:52.346 12:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.346 12:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:52.347 12:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:52.605 12:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:52.605 12:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:53.983 12:56:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:53.983 12:56:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:53.983 12:56:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.983 12:56:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:53.983 12:56:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.983 12:56:45 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:53.983 12:56:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.983 12:56:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:54.251 12:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:54.251 12:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:54.251 12:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.251 12:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:54.510 12:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.510 12:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:54.510 12:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.511 12:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:54.769 12:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.769 12:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:54.769 
12:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.769 12:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:55.337 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.337 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:55.337 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.337 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:55.596 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:55.596 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 4061467 00:26:55.596 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 4061467 ']' 00:26:55.596 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 4061467 00:26:55.596 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:55.596 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:55.596 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4061467 00:26:55.596 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:55.596 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:55.596 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4061467' 00:26:55.596 killing process with pid 4061467 00:26:55.596 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 4061467 00:26:55.596 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 4061467 00:26:55.859 Connection closed with partial response: 00:26:55.859 00:26:55.859 00:26:55.859 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 4061467 00:26:55.859 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:55.859 [2024-07-15 12:56:11.399866] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:26:55.859 [2024-07-15 12:56:11.399930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4061467 ] 00:26:55.859 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.859 [2024-07-15 12:56:11.513216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.859 [2024-07-15 12:56:11.655906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:55.859 Running I/O for 90 seconds... 
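The trace above repeatedly checks each I/O path's `current`, `connected`, and `accessible` flags by piping `rpc.py bdev_nvme_get_io_paths` through `jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="<port>").<field>'`. A minimal sketch of that selection logic, using an illustrative JSON sample (the field names mirror the jq filter in the log; the values are hypothetical, not captured from this run):

```python
import json

# Hypothetical sample in the shape the jq filter expects from
# `rpc.py bdev_nvme_get_io_paths`; values are illustrative only.
sample = json.loads("""
{
  "poll_groups": [
    {
      "io_paths": [
        {"transport": {"trsvcid": "4420"},
         "current": false, "connected": true, "accessible": true},
        {"transport": {"trsvcid": "4421"},
         "current": true, "connected": true, "accessible": true}
      ]
    }
  ]
}
""")

def port_status(paths, trsvcid, field):
    """Mimic: jq -r '.poll_groups[].io_paths[]
    | select (.transport.trsvcid=="<port>").<field>'"""
    return [p[field]
            for g in paths["poll_groups"]
            for p in g["io_paths"]
            if p["transport"]["trsvcid"] == trsvcid]

print(port_status(sample, "4420", "current"))  # [False]
print(port_status(sample, "4421", "current"))  # [True]
```

The test script's `port_status` helper passes when the extracted value matches the expected flag (the `[[ false == \f\a\l\s\e ]]` comparisons seen throughout the trace).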
00:26:55.859 [2024-07-15 12:56:28.115088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:106488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.859 [2024-07-15 12:56:28.115164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:55.859 [2024-07-15 12:56:28.115251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.859 [2024-07-15 12:56:28.115293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:55.859 [2024-07-15 12:56:28.115337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:106944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.859 [2024-07-15 12:56:28.115359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.115401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:106952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.115425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.115465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:106960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.115490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.115531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:106968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 
[2024-07-15 12:56:28.115554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.115595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.115617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.115658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:106984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.115681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.115722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:106992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.115743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.115783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.115805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.115846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.115878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 
12:56:28.115920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.115942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.115982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.116004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.116044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.116067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.116108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.116129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.116170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:107048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.116192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.116233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 
12:56:28.116264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.116307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.116329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.116371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:107072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.116393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.116436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.116458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.116498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.116520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.116561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.116583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 
12:56:28.116624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.116651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.116692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.116714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.116755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.116778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.116819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.116842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.116883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:107136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.116905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.116948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.116970] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.117010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.117033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.117072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.117094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.117134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.117156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.117196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.117218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.117267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.117290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.117331] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:107192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.117352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.117393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:107200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.860 [2024-07-15 12:56:28.117414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.117458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.860 [2024-07-15 12:56:28.117480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.117520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.860 [2024-07-15 12:56:28.117542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.117582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:106512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.860 [2024-07-15 12:56:28.117605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.117646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.860 [2024-07-15 12:56:28.117668] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.117708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.860 [2024-07-15 12:56:28.117730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.117770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:106536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.860 [2024-07-15 12:56:28.117793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.117833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.860 [2024-07-15 12:56:28.117855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.117896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.860 [2024-07-15 12:56:28.117918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:55.860 [2024-07-15 12:56:28.117959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.860 [2024-07-15 12:56:28.117981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.118021] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:106568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.118043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.118083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:106576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.118105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.118146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.118167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.118212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:106592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.118234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.118287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.118310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.119537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:106608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.119577] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.119631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.861 [2024-07-15 12:56:28.119655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.119706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:106616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.119730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.119780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.119802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.119853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:106632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.119875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.119925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:106640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.119947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.119998] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.120019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.120069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.120091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.120141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.120163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.120214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.120236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.120297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:106680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.120327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.120378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:106688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.120399] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.120451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.120473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.120523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.120545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.120594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.120616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.120668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:106720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.120690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.120741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:106728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.120762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.120813] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.120834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.120884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.120906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.120957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:106752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.120978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.121029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.121051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.121101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:106768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.121123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.121173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:106776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.121199] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.121250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.121282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.121333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:106792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.121355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.121406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.121429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.121479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:106808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.121501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.121552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:106816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.121574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.121625] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:106824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.121647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.121697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:106832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.121720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.121770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.121792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.121844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.121866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.121916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.121937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.121988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.122010] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.122060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.122085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.122136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:106880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.861 [2024-07-15 12:56:28.122158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:55.861 [2024-07-15 12:56:28.122210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:106888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.862 [2024-07-15 12:56:28.122232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.122293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.862 [2024-07-15 12:56:28.122316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.122367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.862 [2024-07-15 12:56:28.122389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.122439] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:106912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.862 [2024-07-15 12:56:28.122461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.122512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.862 [2024-07-15 12:56:28.122535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.122586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.862 [2024-07-15 12:56:28.122609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.122659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:107216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.122681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.122732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:107224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.122753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.122804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:107232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.122826] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.122876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:107240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.122899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.122950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:107248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.122972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.123027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.123048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.123100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:107264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.123122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.123490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.123520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.123584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.123607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.123667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.123689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.123750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:107296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.123772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.123833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:107304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.123855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.123916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.123937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.123997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:107320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.124018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.124079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:107328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.124101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.124162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.124183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.124243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.124274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.124341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.124363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.124423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.124445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.124505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.124527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.124588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.124609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.124669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.124690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.124752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.124773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.124833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.124855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.124915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.124936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.124998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.125020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.125081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.125102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.125162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.125184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.125244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.125273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.125333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.125359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.125420] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.125443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.125503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.125524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.125584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.125607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.125667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.125689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.125750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.125772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:55.862 [2024-07-15 12:56:28.125832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.862 [2024-07-15 12:56:28.125855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:28.125916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:28.125937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.488755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:104208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.488827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.488907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:104224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.488934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.488976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.488999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.489040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:104256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.489064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.489104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.489143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.489185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.489209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.489249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.489283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.489324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:104320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.489346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.489388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.489410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.489451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.489472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.489512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.863 [2024-07-15 12:56:44.489534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.489574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.863 [2024-07-15 12:56:44.489597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.489638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.863 [2024-07-15 12:56:44.489661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.489702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.863 [2024-07-15 12:56:44.489724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.489765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.863 [2024-07-15 12:56:44.489788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.489829] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.863 [2024-07-15 12:56:44.489852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.489892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:103960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.863 [2024-07-15 12:56:44.489915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.489961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.863 [2024-07-15 12:56:44.489984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.490024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:104376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.490047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.490087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:104392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.490110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.490150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:104400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.490173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.490213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:104416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.490234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.490286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.490310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.490351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:104448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.490374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.490415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:104464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.490438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.490478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.490500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.490540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.490563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.490602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:104512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.490625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.490665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.490687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.490732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.490754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.490795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.863 [2024-07-15 12:56:44.490817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.490857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.863 [2024-07-15 12:56:44.490879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.490919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.863 [2024-07-15 12:56:44.490941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.490981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.863 [2024-07-15 12:56:44.491003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.491044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.863 [2024-07-15 12:56:44.491066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.495540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.863 [2024-07-15 12:56:44.495590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:55.863 [2024-07-15 12:56:44.495637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:104032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.863 [2024-07-15 12:56:44.495660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:55.864 [2024-07-15 12:56:44.495701] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:104064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.864 [2024-07-15 12:56:44.495723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.864 [2024-07-15 12:56:44.495765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.864 [2024-07-15 12:56:44.495787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:55.864 [2024-07-15 12:56:44.495828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:104592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.864 [2024-07-15 12:56:44.495849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:55.864 [2024-07-15 12:56:44.495890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.864 [2024-07-15 12:56:44.495912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:55.864 [2024-07-15 12:56:44.495961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.864 [2024-07-15 12:56:44.495984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:55.864 [2024-07-15 12:56:44.496025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.864 [2024-07-15 12:56:44.496048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:55.864 [2024-07-15 12:56:44.496088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.864 [2024-07-15 12:56:44.496110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:55.864 [2024-07-15 12:56:44.496150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:103920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.864 [2024-07-15 12:56:44.496173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:55.864 [2024-07-15 12:56:44.496213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:103952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.864 [2024-07-15 12:56:44.496235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:55.864 [2024-07-15 12:56:44.496288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.864 [2024-07-15 12:56:44.496311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:55.864 [2024-07-15 12:56:44.496351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.864 [2024-07-15 12:56:44.496373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:55.864 [2024-07-15 12:56:44.496414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.864 [2024-07-15 12:56:44.496436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:55.864 [2024-07-15 12:56:44.496476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:104024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.864 [2024-07-15 12:56:44.496498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:55.864 [2024-07-15 12:56:44.496538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:104056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.864 [2024-07-15 12:56:44.496561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:55.864 [2024-07-15 12:56:44.496602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.864 [2024-07-15 12:56:44.496624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.864 [2024-07-15 12:56:44.496665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.864 [2024-07-15 12:56:44.496686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:55.864 [2024-07-15 12:56:44.496728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.864 [2024-07-15 12:56:44.496755] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:55.864 [2024-07-15 12:56:44.496796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.864 [2024-07-15 12:56:44.496818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:55.864 [2024-07-15 12:56:44.496858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.864 [2024-07-15 12:56:44.496881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:55.864 [2024-07-15 12:56:44.496922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.864 [2024-07-15 12:56:44.496944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:55.864 [2024-07-15 12:56:44.496984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.864 [2024-07-15 12:56:44.497006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:55.864 Received shutdown signal, test time was about 33.896232 seconds
00:26:55.864
00:26:55.864 Latency(us)
00:26:55.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:55.864 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:55.864 Verification LBA range: start 0x0 length 0x4000
00:26:55.864 Nvme0n1 : 33.89 4711.33 18.40 0.00 0.00 27088.65 565.99 4057035.87
00:26:55.864 =================================================================================================================== 00:26:55.864 Total : 4711.33 18.40 0.00 0.00 27088.65 565.99 4057035.87 00:26:55.864 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:56.123 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:56.123 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:56.123 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:56.123 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:56.123 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:56.123 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:56.123 12:56:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:56.123 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:56.123 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:56.123 rmmod nvme_tcp 00:26:56.123 rmmod nvme_fabrics 00:26:56.123 rmmod nvme_keyring 00:26:56.123 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:56.123 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:56.123 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:56.123 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 4061172 ']' 00:26:56.123 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 
4061172 00:26:56.123 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 4061172 ']' 00:26:56.123 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 4061172 00:26:56.123 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:56.382 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:56.382 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4061172 00:26:56.382 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:56.382 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:56.382 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4061172' 00:26:56.382 killing process with pid 4061172 00:26:56.382 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 4061172 00:26:56.382 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 4061172 00:26:56.641 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:56.641 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:56.641 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:56.641 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:56.641 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:56.641 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.641 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:26:56.641 12:56:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.547 12:56:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:58.547 00:26:58.547 real 0m47.085s 00:26:58.547 user 2m15.537s 00:26:58.547 sys 0m11.356s 00:26:58.547 12:56:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:58.547 12:56:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:58.547 ************************************ 00:26:58.547 END TEST nvmf_host_multipath_status 00:26:58.547 ************************************ 00:26:58.547 12:56:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:58.547 12:56:50 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:58.547 12:56:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:58.547 12:56:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:58.547 12:56:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:58.547 ************************************ 00:26:58.547 START TEST nvmf_discovery_remove_ifc 00:26:58.547 ************************************ 00:26:58.547 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:58.805 * Looking for test storage... 
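The multipath_status run above ends with a long flood of ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions followed by the bdevperf latency summary. Two quick sanity checks a reader might run against such a log are sketched below; both helpers are hypothetical and not part of the SPDK autotest suite. The first tallies completion statuses (during path toggling, every failed I/O should fall in the same 03/02 bucket); the second cross-checks the summary row, since with 4096-byte I/Os the MiB/s column should equal IOPS * 4096 / 2^20.

```shell
#!/bin/sh
# Hypothetical post-processing for an nvmf autotest log; not part of SPDK.

# Tally spdk_nvme_print_completion statuses found in a log file ($1).
count_completion_statuses() {
  grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z ]* ([0-9a-f]*/[0-9a-f]*)' "$1" |
    sed 's/.*NOTICE\*: //' | sort | uniq -c | sort -rn
}

# Cross-check the reported throughput: 4711.33 IOPS * 4096 B / 2^20 B/MiB,
# which should reproduce the 18.40 MiB/s column in the summary above.
awk 'BEGIN { printf "%.2f MiB/s\n", 4711.33 * 4096 / 1048576 }'
```

Counting statuses this way makes it easy to confirm the run saw only the expected path-related status (status code type 0x3, status code 0x2) rather than media or generic command errors.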
00:26:58.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.805 12:56:50 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.805 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.806 12:56:50 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:58.806 12:56:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:05.363 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:05.363 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:05.363 Found net devices under 0000:af:00.0: cvl_0_0 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:05.363 Found net devices under 0000:af:00.1: cvl_0_1 
00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:05.363 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:05.364 
12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:05.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:05.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:27:05.364 00:27:05.364 --- 10.0.0.2 ping statistics --- 00:27:05.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.364 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:05.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:05.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:27:05.364 00:27:05.364 --- 10.0.0.1 ping statistics --- 00:27:05.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.364 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=4071657 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 4071657 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 4071657 ']' 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:05.364 12:56:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.364 [2024-07-15 12:56:56.506946] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:27:05.364 [2024-07-15 12:56:56.507001] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.364 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.364 [2024-07-15 12:56:56.591501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.364 [2024-07-15 12:56:56.694668] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.364 [2024-07-15 12:56:56.694712] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:05.364 [2024-07-15 12:56:56.694726] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.364 [2024-07-15 12:56:56.694741] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.364 [2024-07-15 12:56:56.694751] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:05.364 [2024-07-15 12:56:56.694782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.932 12:56:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:05.932 12:56:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:05.932 12:56:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:05.932 12:56:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:05.932 12:56:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.932 12:56:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.932 12:56:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:05.932 12:56:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.932 12:56:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.932 [2024-07-15 12:56:57.751200] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.932 [2024-07-15 12:56:57.759392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:05.932 null0 00:27:05.932 [2024-07-15 12:56:57.791383] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:05.932 12:56:57 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.932 12:56:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=4071931 00:27:05.932 12:56:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4071931 /tmp/host.sock 00:27:05.932 12:56:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:05.932 12:56:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 4071931 ']' 00:27:05.932 12:56:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:27:05.932 12:56:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:05.932 12:56:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:05.932 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:05.932 12:56:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:05.932 12:56:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.191 [2024-07-15 12:56:57.898865] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:27:06.191 [2024-07-15 12:56:57.898974] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4071931 ] 00:27:06.191 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.191 [2024-07-15 12:56:58.014895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.191 [2024-07-15 12:56:58.104701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.127 12:56:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:07.127 12:56:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:07.127 12:56:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:07.127 12:56:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:07.127 12:56:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.127 12:56:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:07.127 12:56:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.127 12:56:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:07.127 12:56:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.127 12:56:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:07.127 12:56:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.127 12:56:58 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:07.127 12:56:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.127 12:56:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.063 [2024-07-15 12:56:59.971439] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:08.063 [2024-07-15 12:56:59.971463] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:08.063 [2024-07-15 12:56:59.971481] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:08.322 [2024-07-15 12:57:00.057780] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:08.322 [2024-07-15 12:57:00.164669] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:08.322 [2024-07-15 12:57:00.164727] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:08.322 [2024-07-15 12:57:00.164754] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:08.322 [2024-07-15 12:57:00.164772] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:08.322 [2024-07-15 12:57:00.164800] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:08.322 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.322 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:08.322 12:57:00 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:08.322 [2024-07-15 12:57:00.169338] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1f9d370 was disconnected and freed. delete nvme_qpair. 00:27:08.322 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.322 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:08.322 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.322 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:08.322 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.322 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:08.322 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.322 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:08.322 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:08.322 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:08.580 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:08.580 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:08.580 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.580 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:08.580 12:57:00 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.580 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:08.580 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.580 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:08.580 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.580 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:08.580 12:57:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:09.516 12:57:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:09.516 12:57:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:09.516 12:57:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:09.516 12:57:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.516 12:57:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:09.516 12:57:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:09.516 12:57:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:09.516 12:57:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.516 12:57:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:09.516 12:57:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:10.894 12:57:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:10.894 12:57:02 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:10.894 12:57:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.894 12:57:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:10.894 12:57:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:10.894 12:57:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:10.894 12:57:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:10.894 12:57:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.894 12:57:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:10.894 12:57:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:11.842 12:57:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:11.842 12:57:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:11.842 12:57:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:11.842 12:57:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.842 12:57:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:11.842 12:57:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:11.842 12:57:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:11.842 12:57:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.842 12:57:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:11.842 
12:57:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:12.827 12:57:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:12.827 12:57:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:12.827 12:57:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:12.827 12:57:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.827 12:57:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:12.827 12:57:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:12.827 12:57:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:12.827 12:57:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.827 12:57:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:12.828 12:57:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:13.796 [2024-07-15 12:57:05.605475] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:13.796 [2024-07-15 12:57:05.605523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:13.796 [2024-07-15 12:57:05.605538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.796 [2024-07-15 12:57:05.605552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:13.796 [2024-07-15 12:57:05.605562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.796 [2024-07-15 12:57:05.605573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:13.796 [2024-07-15 12:57:05.605584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.796 [2024-07-15 12:57:05.605595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:13.796 [2024-07-15 12:57:05.605604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.796 [2024-07-15 12:57:05.605616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:13.796 [2024-07-15 12:57:05.605626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.796 [2024-07-15 12:57:05.605635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f63c00 is same with the state(5) to be set 00:27:13.796 [2024-07-15 12:57:05.615496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63c00 (9): Bad file descriptor 00:27:13.796 12:57:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:13.796 12:57:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:13.796 12:57:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:13.796 12:57:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.796 12:57:05 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:13.796 12:57:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:13.796 12:57:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:13.796 [2024-07-15 12:57:05.625800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:15.172 [2024-07-15 12:57:06.690372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:15.172 [2024-07-15 12:57:06.690451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f63c00 with addr=10.0.0.2, port=4420 00:27:15.172 [2024-07-15 12:57:06.690481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f63c00 is same with the state(5) to be set 00:27:15.172 [2024-07-15 12:57:06.690543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63c00 (9): Bad file descriptor 00:27:15.172 [2024-07-15 12:57:06.690673] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:15.172 [2024-07-15 12:57:06.690713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:15.172 [2024-07-15 12:57:06.690736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:15.172 [2024-07-15 12:57:06.690758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:15.172 [2024-07-15 12:57:06.690798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.172 [2024-07-15 12:57:06.690821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:15.172 12:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.172 12:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:15.172 12:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:16.107 [2024-07-15 12:57:07.693318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:16.107 [2024-07-15 12:57:07.693345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:16.107 [2024-07-15 12:57:07.693355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:16.107 [2024-07-15 12:57:07.693365] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:27:16.108 [2024-07-15 12:57:07.693381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.108 [2024-07-15 12:57:07.693404] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:16.108 [2024-07-15 12:57:07.693430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.108 [2024-07-15 12:57:07.693442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.108 [2024-07-15 12:57:07.693456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.108 [2024-07-15 12:57:07.693466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.108 [2024-07-15 12:57:07.693477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.108 [2024-07-15 12:57:07.693487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.108 [2024-07-15 12:57:07.693499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.108 [2024-07-15 12:57:07.693508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.108 [2024-07-15 12:57:07.693520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.108 [2024-07-15 12:57:07.693530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.108 [2024-07-15 12:57:07.693541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:27:16.108 [2024-07-15 12:57:07.694201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63080 (9): Bad file descriptor 00:27:16.108 [2024-07-15 12:57:07.695216] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:16.108 [2024-07-15 12:57:07.695236] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:16.108 12:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:16.108 12:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:16.108 12:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:16.108 12:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.108 12:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:16.108 12:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:16.108 12:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:16.108 12:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.108 12:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:16.108 12:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:16.108 12:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:16.108 12:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:16.108 12:57:07 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:16.108 12:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:16.108 12:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:16.108 12:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.108 12:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:16.108 12:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:16.108 12:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:16.108 12:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.108 12:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:16.108 12:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:17.042 12:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:17.042 12:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:17.042 12:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:17.042 12:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.042 12:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:17.042 12:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:17.042 12:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:17.042 12:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.042 
12:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:17.042 12:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:17.975 [2024-07-15 12:57:09.706837] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:17.975 [2024-07-15 12:57:09.706858] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:17.975 [2024-07-15 12:57:09.706875] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:17.975 [2024-07-15 12:57:09.793184] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:18.232 12:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:18.232 12:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:18.232 12:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:18.232 12:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.232 12:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:18.232 12:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:18.232 12:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:18.232 12:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.232 [2024-07-15 12:57:10.012539] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:18.232 [2024-07-15 12:57:10.012582] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:18.232 [2024-07-15 12:57:10.012606] 
bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:18.232 [2024-07-15 12:57:10.012623] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:18.232 [2024-07-15 12:57:10.012632] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:18.232 [2024-07-15 12:57:10.016015] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1f6a920 was disconnected and freed. delete nvme_qpair. 00:27:18.232 12:57:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:18.232 12:57:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:19.165 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:19.165 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:19.165 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.165 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:19.165 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:19.165 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:19.165 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:19.165 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.165 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:19.165 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:19.165 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@90 -- # killprocess 4071931 00:27:19.165 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 4071931 ']' 00:27:19.165 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 4071931 00:27:19.165 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:19.165 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:19.165 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4071931 00:27:19.424 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:19.424 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:19.424 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4071931' 00:27:19.424 killing process with pid 4071931 00:27:19.424 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 4071931 00:27:19.424 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 4071931 00:27:19.424 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:19.424 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:19.424 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:19.424 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:19.424 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:19.424 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:19.424 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:19.424 rmmod nvme_tcp 
00:27:19.424 rmmod nvme_fabrics 00:27:19.424 rmmod nvme_keyring 00:27:19.683 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:19.683 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:19.683 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:27:19.683 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 4071657 ']' 00:27:19.683 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 4071657 00:27:19.683 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 4071657 ']' 00:27:19.683 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 4071657 00:27:19.683 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:19.683 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:19.683 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4071657 00:27:19.683 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:19.683 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:19.683 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4071657' 00:27:19.683 killing process with pid 4071657 00:27:19.683 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 4071657 00:27:19.683 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 4071657 00:27:19.943 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:19.943 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:19.943 
12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:19.943 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:19.943 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:19.943 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.943 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:19.943 12:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.849 12:57:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:21.849 00:27:21.849 real 0m23.315s 00:27:21.849 user 0m30.107s 00:27:21.849 sys 0m5.976s 00:27:21.849 12:57:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:21.849 12:57:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:21.849 ************************************ 00:27:21.849 END TEST nvmf_discovery_remove_ifc 00:27:21.849 ************************************ 00:27:22.108 12:57:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:22.108 12:57:13 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:22.108 12:57:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:22.108 12:57:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:22.108 12:57:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:22.108 ************************************ 00:27:22.108 START TEST nvmf_identify_kernel_target 00:27:22.108 ************************************ 00:27:22.108 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:22.108 * Looking for test storage... 00:27:22.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:22.108 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:22.108 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:22.108 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:22.108 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:22.108 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:22.108 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:22.109 12:57:13 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:22.109 12:57:13 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:22.109 12:57:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:28.682 12:57:19 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:28.682 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:28.682 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.682 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:28.683 Found net devices under 0000:af:00.0: cvl_0_0 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:28.683 Found net devices under 
0000:af:00.1: cvl_0_1 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:28.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:28.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:27:28.683 00:27:28.683 --- 10.0.0.2 ping statistics --- 00:27:28.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.683 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:28.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:28.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:27:28.683 00:27:28.683 --- 10.0.0.1 ping statistics --- 00:27:28.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.683 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.683 12:57:19 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:28.683 12:57:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:30.591 Waiting for block devices as requested 00:27:30.591 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:27:30.850 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:30.850 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:31.110 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:31.110 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:31.110 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:31.110 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:31.384 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:31.385 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:31.385 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:31.648 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:31.648 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:31.648 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:31.648 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:31.906 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:31.906 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:31.907 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:32.166 No valid GPT data, bailing 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:32.166 12:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:27:32.166 00:27:32.166 Discovery Log Number of Records 2, Generation counter 2 00:27:32.166 =====Discovery Log Entry 0====== 00:27:32.166 trtype: tcp 00:27:32.166 adrfam: ipv4 00:27:32.166 subtype: current discovery subsystem 00:27:32.166 treq: not specified, sq flow control disable supported 00:27:32.166 portid: 1 00:27:32.166 trsvcid: 4420 00:27:32.166 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:32.166 traddr: 10.0.0.1 00:27:32.166 eflags: none 00:27:32.166 sectype: none 00:27:32.166 =====Discovery Log Entry 1====== 00:27:32.166 trtype: tcp 00:27:32.166 adrfam: ipv4 00:27:32.166 subtype: nvme subsystem 00:27:32.166 treq: not specified, sq flow control disable supported 00:27:32.166 portid: 1 00:27:32.166 trsvcid: 4420 00:27:32.166 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:32.166 traddr: 10.0.0.1 00:27:32.166 eflags: none 00:27:32.166 sectype: none 00:27:32.166 12:57:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:32.166 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:32.166 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.427 ===================================================== 00:27:32.427 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:32.427 ===================================================== 00:27:32.427 Controller Capabilities/Features 00:27:32.427 ================================ 00:27:32.427 Vendor ID: 0000 00:27:32.427 Subsystem Vendor ID: 0000 00:27:32.427 Serial Number: 3e0871a881fa0addba2a 00:27:32.427 Model Number: Linux 00:27:32.427 Firmware Version: 6.7.0-68 00:27:32.427 Recommended Arb Burst: 0 00:27:32.427 IEEE OUI Identifier: 00 00 00 00:27:32.427 Multi-path I/O 00:27:32.427 May have multiple subsystem ports: No 00:27:32.427 May have multiple controllers: No 00:27:32.427 Associated with SR-IOV VF: No 00:27:32.427 Max Data Transfer Size: Unlimited 00:27:32.427 Max Number of Namespaces: 0 00:27:32.427 Max Number of I/O Queues: 1024 00:27:32.427 NVMe Specification Version (VS): 1.3 00:27:32.427 NVMe Specification Version (Identify): 1.3 00:27:32.427 Maximum Queue Entries: 1024 00:27:32.427 Contiguous Queues Required: No 00:27:32.427 Arbitration Mechanisms Supported 00:27:32.427 Weighted Round Robin: Not Supported 00:27:32.427 Vendor Specific: Not Supported 00:27:32.427 Reset Timeout: 7500 ms 00:27:32.427 Doorbell Stride: 4 bytes 00:27:32.427 NVM Subsystem Reset: Not Supported 00:27:32.427 Command Sets Supported 00:27:32.427 NVM Command Set: Supported 00:27:32.427 Boot Partition: Not Supported 00:27:32.427 Memory Page Size Minimum: 4096 bytes 00:27:32.427 Memory Page Size Maximum: 4096 bytes 00:27:32.427 Persistent Memory Region: Not Supported 00:27:32.427 Optional Asynchronous Events Supported 00:27:32.427 Namespace Attribute Notices: Not Supported 00:27:32.427 Firmware Activation Notices: Not Supported 00:27:32.427 ANA Change Notices: Not Supported 00:27:32.427 PLE Aggregate Log Change Notices: Not Supported 
00:27:32.427 LBA Status Info Alert Notices: Not Supported 00:27:32.427 EGE Aggregate Log Change Notices: Not Supported 00:27:32.427 Normal NVM Subsystem Shutdown event: Not Supported 00:27:32.427 Zone Descriptor Change Notices: Not Supported 00:27:32.427 Discovery Log Change Notices: Supported 00:27:32.427 Controller Attributes 00:27:32.427 128-bit Host Identifier: Not Supported 00:27:32.427 Non-Operational Permissive Mode: Not Supported 00:27:32.427 NVM Sets: Not Supported 00:27:32.427 Read Recovery Levels: Not Supported 00:27:32.427 Endurance Groups: Not Supported 00:27:32.427 Predictable Latency Mode: Not Supported 00:27:32.427 Traffic Based Keep ALive: Not Supported 00:27:32.427 Namespace Granularity: Not Supported 00:27:32.427 SQ Associations: Not Supported 00:27:32.427 UUID List: Not Supported 00:27:32.427 Multi-Domain Subsystem: Not Supported 00:27:32.427 Fixed Capacity Management: Not Supported 00:27:32.427 Variable Capacity Management: Not Supported 00:27:32.427 Delete Endurance Group: Not Supported 00:27:32.427 Delete NVM Set: Not Supported 00:27:32.427 Extended LBA Formats Supported: Not Supported 00:27:32.427 Flexible Data Placement Supported: Not Supported 00:27:32.427 00:27:32.427 Controller Memory Buffer Support 00:27:32.427 ================================ 00:27:32.427 Supported: No 00:27:32.427 00:27:32.427 Persistent Memory Region Support 00:27:32.427 ================================ 00:27:32.427 Supported: No 00:27:32.427 00:27:32.427 Admin Command Set Attributes 00:27:32.427 ============================ 00:27:32.427 Security Send/Receive: Not Supported 00:27:32.427 Format NVM: Not Supported 00:27:32.427 Firmware Activate/Download: Not Supported 00:27:32.427 Namespace Management: Not Supported 00:27:32.427 Device Self-Test: Not Supported 00:27:32.427 Directives: Not Supported 00:27:32.427 NVMe-MI: Not Supported 00:27:32.427 Virtualization Management: Not Supported 00:27:32.427 Doorbell Buffer Config: Not Supported 00:27:32.427 Get LBA Status 
Capability: Not Supported 00:27:32.427 Command & Feature Lockdown Capability: Not Supported 00:27:32.427 Abort Command Limit: 1 00:27:32.427 Async Event Request Limit: 1 00:27:32.427 Number of Firmware Slots: N/A 00:27:32.427 Firmware Slot 1 Read-Only: N/A 00:27:32.427 Firmware Activation Without Reset: N/A 00:27:32.427 Multiple Update Detection Support: N/A 00:27:32.427 Firmware Update Granularity: No Information Provided 00:27:32.427 Per-Namespace SMART Log: No 00:27:32.427 Asymmetric Namespace Access Log Page: Not Supported 00:27:32.427 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:32.427 Command Effects Log Page: Not Supported 00:27:32.427 Get Log Page Extended Data: Supported 00:27:32.427 Telemetry Log Pages: Not Supported 00:27:32.427 Persistent Event Log Pages: Not Supported 00:27:32.427 Supported Log Pages Log Page: May Support 00:27:32.427 Commands Supported & Effects Log Page: Not Supported 00:27:32.427 Feature Identifiers & Effects Log Page:May Support 00:27:32.427 NVMe-MI Commands & Effects Log Page: May Support 00:27:32.427 Data Area 4 for Telemetry Log: Not Supported 00:27:32.427 Error Log Page Entries Supported: 1 00:27:32.427 Keep Alive: Not Supported 00:27:32.427 00:27:32.427 NVM Command Set Attributes 00:27:32.427 ========================== 00:27:32.427 Submission Queue Entry Size 00:27:32.427 Max: 1 00:27:32.427 Min: 1 00:27:32.427 Completion Queue Entry Size 00:27:32.427 Max: 1 00:27:32.427 Min: 1 00:27:32.427 Number of Namespaces: 0 00:27:32.427 Compare Command: Not Supported 00:27:32.427 Write Uncorrectable Command: Not Supported 00:27:32.427 Dataset Management Command: Not Supported 00:27:32.427 Write Zeroes Command: Not Supported 00:27:32.427 Set Features Save Field: Not Supported 00:27:32.427 Reservations: Not Supported 00:27:32.427 Timestamp: Not Supported 00:27:32.427 Copy: Not Supported 00:27:32.427 Volatile Write Cache: Not Present 00:27:32.427 Atomic Write Unit (Normal): 1 00:27:32.427 Atomic Write Unit (PFail): 1 
00:27:32.427 Atomic Compare & Write Unit: 1 00:27:32.427 Fused Compare & Write: Not Supported 00:27:32.427 Scatter-Gather List 00:27:32.427 SGL Command Set: Supported 00:27:32.427 SGL Keyed: Not Supported 00:27:32.427 SGL Bit Bucket Descriptor: Not Supported 00:27:32.427 SGL Metadata Pointer: Not Supported 00:27:32.427 Oversized SGL: Not Supported 00:27:32.427 SGL Metadata Address: Not Supported 00:27:32.427 SGL Offset: Supported 00:27:32.427 Transport SGL Data Block: Not Supported 00:27:32.427 Replay Protected Memory Block: Not Supported 00:27:32.427 00:27:32.427 Firmware Slot Information 00:27:32.427 ========================= 00:27:32.427 Active slot: 0 00:27:32.427 00:27:32.427 00:27:32.427 Error Log 00:27:32.427 ========= 00:27:32.427 00:27:32.427 Active Namespaces 00:27:32.427 ================= 00:27:32.427 Discovery Log Page 00:27:32.427 ================== 00:27:32.427 Generation Counter: 2 00:27:32.427 Number of Records: 2 00:27:32.428 Record Format: 0 00:27:32.428 00:27:32.428 Discovery Log Entry 0 00:27:32.428 ---------------------- 00:27:32.428 Transport Type: 3 (TCP) 00:27:32.428 Address Family: 1 (IPv4) 00:27:32.428 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:32.428 Entry Flags: 00:27:32.428 Duplicate Returned Information: 0 00:27:32.428 Explicit Persistent Connection Support for Discovery: 0 00:27:32.428 Transport Requirements: 00:27:32.428 Secure Channel: Not Specified 00:27:32.428 Port ID: 1 (0x0001) 00:27:32.428 Controller ID: 65535 (0xffff) 00:27:32.428 Admin Max SQ Size: 32 00:27:32.428 Transport Service Identifier: 4420 00:27:32.428 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:32.428 Transport Address: 10.0.0.1 00:27:32.428 Discovery Log Entry 1 00:27:32.428 ---------------------- 00:27:32.428 Transport Type: 3 (TCP) 00:27:32.428 Address Family: 1 (IPv4) 00:27:32.428 Subsystem Type: 2 (NVM Subsystem) 00:27:32.428 Entry Flags: 00:27:32.428 Duplicate Returned Information: 0 00:27:32.428 Explicit Persistent 
Connection Support for Discovery: 0 00:27:32.428 Transport Requirements: 00:27:32.428 Secure Channel: Not Specified 00:27:32.428 Port ID: 1 (0x0001) 00:27:32.428 Controller ID: 65535 (0xffff) 00:27:32.428 Admin Max SQ Size: 32 00:27:32.428 Transport Service Identifier: 4420 00:27:32.428 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:32.428 Transport Address: 10.0.0.1 00:27:32.428 12:57:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:32.428 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.428 get_feature(0x01) failed 00:27:32.428 get_feature(0x02) failed 00:27:32.428 get_feature(0x04) failed 00:27:32.428 ===================================================== 00:27:32.428 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:32.428 ===================================================== 00:27:32.428 Controller Capabilities/Features 00:27:32.428 ================================ 00:27:32.428 Vendor ID: 0000 00:27:32.428 Subsystem Vendor ID: 0000 00:27:32.428 Serial Number: 62cb455f43b0a3dda742 00:27:32.428 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:32.428 Firmware Version: 6.7.0-68 00:27:32.428 Recommended Arb Burst: 6 00:27:32.428 IEEE OUI Identifier: 00 00 00 00:27:32.428 Multi-path I/O 00:27:32.428 May have multiple subsystem ports: Yes 00:27:32.428 May have multiple controllers: Yes 00:27:32.428 Associated with SR-IOV VF: No 00:27:32.428 Max Data Transfer Size: Unlimited 00:27:32.428 Max Number of Namespaces: 1024 00:27:32.428 Max Number of I/O Queues: 128 00:27:32.428 NVMe Specification Version (VS): 1.3 00:27:32.428 NVMe Specification Version (Identify): 1.3 00:27:32.428 Maximum Queue Entries: 1024 00:27:32.428 Contiguous Queues Required: No 00:27:32.428 Arbitration Mechanisms Supported 
00:27:32.428 Weighted Round Robin: Not Supported 00:27:32.428 Vendor Specific: Not Supported 00:27:32.428 Reset Timeout: 7500 ms 00:27:32.428 Doorbell Stride: 4 bytes 00:27:32.428 NVM Subsystem Reset: Not Supported 00:27:32.428 Command Sets Supported 00:27:32.428 NVM Command Set: Supported 00:27:32.428 Boot Partition: Not Supported 00:27:32.428 Memory Page Size Minimum: 4096 bytes 00:27:32.428 Memory Page Size Maximum: 4096 bytes 00:27:32.428 Persistent Memory Region: Not Supported 00:27:32.428 Optional Asynchronous Events Supported 00:27:32.428 Namespace Attribute Notices: Supported 00:27:32.428 Firmware Activation Notices: Not Supported 00:27:32.428 ANA Change Notices: Supported 00:27:32.428 PLE Aggregate Log Change Notices: Not Supported 00:27:32.428 LBA Status Info Alert Notices: Not Supported 00:27:32.428 EGE Aggregate Log Change Notices: Not Supported 00:27:32.428 Normal NVM Subsystem Shutdown event: Not Supported 00:27:32.428 Zone Descriptor Change Notices: Not Supported 00:27:32.428 Discovery Log Change Notices: Not Supported 00:27:32.428 Controller Attributes 00:27:32.428 128-bit Host Identifier: Supported 00:27:32.428 Non-Operational Permissive Mode: Not Supported 00:27:32.428 NVM Sets: Not Supported 00:27:32.428 Read Recovery Levels: Not Supported 00:27:32.428 Endurance Groups: Not Supported 00:27:32.428 Predictable Latency Mode: Not Supported 00:27:32.428 Traffic Based Keep ALive: Supported 00:27:32.428 Namespace Granularity: Not Supported 00:27:32.428 SQ Associations: Not Supported 00:27:32.428 UUID List: Not Supported 00:27:32.428 Multi-Domain Subsystem: Not Supported 00:27:32.428 Fixed Capacity Management: Not Supported 00:27:32.428 Variable Capacity Management: Not Supported 00:27:32.428 Delete Endurance Group: Not Supported 00:27:32.428 Delete NVM Set: Not Supported 00:27:32.428 Extended LBA Formats Supported: Not Supported 00:27:32.428 Flexible Data Placement Supported: Not Supported 00:27:32.428 00:27:32.428 Controller Memory Buffer Support 
00:27:32.428 ================================ 00:27:32.428 Supported: No 00:27:32.428 00:27:32.428 Persistent Memory Region Support 00:27:32.428 ================================ 00:27:32.428 Supported: No 00:27:32.428 00:27:32.428 Admin Command Set Attributes 00:27:32.428 ============================ 00:27:32.428 Security Send/Receive: Not Supported 00:27:32.428 Format NVM: Not Supported 00:27:32.428 Firmware Activate/Download: Not Supported 00:27:32.428 Namespace Management: Not Supported 00:27:32.428 Device Self-Test: Not Supported 00:27:32.428 Directives: Not Supported 00:27:32.428 NVMe-MI: Not Supported 00:27:32.428 Virtualization Management: Not Supported 00:27:32.428 Doorbell Buffer Config: Not Supported 00:27:32.428 Get LBA Status Capability: Not Supported 00:27:32.428 Command & Feature Lockdown Capability: Not Supported 00:27:32.428 Abort Command Limit: 4 00:27:32.428 Async Event Request Limit: 4 00:27:32.428 Number of Firmware Slots: N/A 00:27:32.428 Firmware Slot 1 Read-Only: N/A 00:27:32.428 Firmware Activation Without Reset: N/A 00:27:32.428 Multiple Update Detection Support: N/A 00:27:32.428 Firmware Update Granularity: No Information Provided 00:27:32.428 Per-Namespace SMART Log: Yes 00:27:32.428 Asymmetric Namespace Access Log Page: Supported 00:27:32.428 ANA Transition Time : 10 sec 00:27:32.428 00:27:32.428 Asymmetric Namespace Access Capabilities 00:27:32.428 ANA Optimized State : Supported 00:27:32.428 ANA Non-Optimized State : Supported 00:27:32.428 ANA Inaccessible State : Supported 00:27:32.428 ANA Persistent Loss State : Supported 00:27:32.428 ANA Change State : Supported 00:27:32.428 ANAGRPID is not changed : No 00:27:32.428 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:32.428 00:27:32.428 ANA Group Identifier Maximum : 128 00:27:32.428 Number of ANA Group Identifiers : 128 00:27:32.428 Max Number of Allowed Namespaces : 1024 00:27:32.428 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:32.428 Command Effects Log Page: Supported 
00:27:32.428 Get Log Page Extended Data: Supported 00:27:32.428 Telemetry Log Pages: Not Supported 00:27:32.428 Persistent Event Log Pages: Not Supported 00:27:32.428 Supported Log Pages Log Page: May Support 00:27:32.428 Commands Supported & Effects Log Page: Not Supported 00:27:32.428 Feature Identifiers & Effects Log Page:May Support 00:27:32.428 NVMe-MI Commands & Effects Log Page: May Support 00:27:32.428 Data Area 4 for Telemetry Log: Not Supported 00:27:32.428 Error Log Page Entries Supported: 128 00:27:32.428 Keep Alive: Supported 00:27:32.428 Keep Alive Granularity: 1000 ms 00:27:32.428 00:27:32.428 NVM Command Set Attributes 00:27:32.428 ========================== 00:27:32.428 Submission Queue Entry Size 00:27:32.428 Max: 64 00:27:32.428 Min: 64 00:27:32.428 Completion Queue Entry Size 00:27:32.428 Max: 16 00:27:32.428 Min: 16 00:27:32.428 Number of Namespaces: 1024 00:27:32.428 Compare Command: Not Supported 00:27:32.428 Write Uncorrectable Command: Not Supported 00:27:32.428 Dataset Management Command: Supported 00:27:32.428 Write Zeroes Command: Supported 00:27:32.428 Set Features Save Field: Not Supported 00:27:32.428 Reservations: Not Supported 00:27:32.428 Timestamp: Not Supported 00:27:32.428 Copy: Not Supported 00:27:32.428 Volatile Write Cache: Present 00:27:32.428 Atomic Write Unit (Normal): 1 00:27:32.428 Atomic Write Unit (PFail): 1 00:27:32.428 Atomic Compare & Write Unit: 1 00:27:32.428 Fused Compare & Write: Not Supported 00:27:32.428 Scatter-Gather List 00:27:32.428 SGL Command Set: Supported 00:27:32.428 SGL Keyed: Not Supported 00:27:32.428 SGL Bit Bucket Descriptor: Not Supported 00:27:32.428 SGL Metadata Pointer: Not Supported 00:27:32.428 Oversized SGL: Not Supported 00:27:32.428 SGL Metadata Address: Not Supported 00:27:32.428 SGL Offset: Supported 00:27:32.428 Transport SGL Data Block: Not Supported 00:27:32.428 Replay Protected Memory Block: Not Supported 00:27:32.428 00:27:32.428 Firmware Slot Information 00:27:32.428 
========================= 00:27:32.428 Active slot: 0 00:27:32.428 00:27:32.428 Asymmetric Namespace Access 00:27:32.428 =========================== 00:27:32.428 Change Count : 0 00:27:32.428 Number of ANA Group Descriptors : 1 00:27:32.429 ANA Group Descriptor : 0 00:27:32.429 ANA Group ID : 1 00:27:32.429 Number of NSID Values : 1 00:27:32.429 Change Count : 0 00:27:32.429 ANA State : 1 00:27:32.429 Namespace Identifier : 1 00:27:32.429 00:27:32.429 Commands Supported and Effects 00:27:32.429 ============================== 00:27:32.429 Admin Commands 00:27:32.429 -------------- 00:27:32.429 Get Log Page (02h): Supported 00:27:32.429 Identify (06h): Supported 00:27:32.429 Abort (08h): Supported 00:27:32.429 Set Features (09h): Supported 00:27:32.429 Get Features (0Ah): Supported 00:27:32.429 Asynchronous Event Request (0Ch): Supported 00:27:32.429 Keep Alive (18h): Supported 00:27:32.429 I/O Commands 00:27:32.429 ------------ 00:27:32.429 Flush (00h): Supported 00:27:32.429 Write (01h): Supported LBA-Change 00:27:32.429 Read (02h): Supported 00:27:32.429 Write Zeroes (08h): Supported LBA-Change 00:27:32.429 Dataset Management (09h): Supported 00:27:32.429 00:27:32.429 Error Log 00:27:32.429 ========= 00:27:32.429 Entry: 0 00:27:32.429 Error Count: 0x3 00:27:32.429 Submission Queue Id: 0x0 00:27:32.429 Command Id: 0x5 00:27:32.429 Phase Bit: 0 00:27:32.429 Status Code: 0x2 00:27:32.429 Status Code Type: 0x0 00:27:32.429 Do Not Retry: 1 00:27:32.429 Error Location: 0x28 00:27:32.429 LBA: 0x0 00:27:32.429 Namespace: 0x0 00:27:32.429 Vendor Log Page: 0x0 00:27:32.429 ----------- 00:27:32.429 Entry: 1 00:27:32.429 Error Count: 0x2 00:27:32.429 Submission Queue Id: 0x0 00:27:32.429 Command Id: 0x5 00:27:32.429 Phase Bit: 0 00:27:32.429 Status Code: 0x2 00:27:32.429 Status Code Type: 0x0 00:27:32.429 Do Not Retry: 1 00:27:32.429 Error Location: 0x28 00:27:32.429 LBA: 0x0 00:27:32.429 Namespace: 0x0 00:27:32.429 Vendor Log Page: 0x0 00:27:32.429 ----------- 00:27:32.429 
Entry: 2 00:27:32.429 Error Count: 0x1 00:27:32.429 Submission Queue Id: 0x0 00:27:32.429 Command Id: 0x4 00:27:32.429 Phase Bit: 0 00:27:32.429 Status Code: 0x2 00:27:32.429 Status Code Type: 0x0 00:27:32.429 Do Not Retry: 1 00:27:32.429 Error Location: 0x28 00:27:32.429 LBA: 0x0 00:27:32.429 Namespace: 0x0 00:27:32.429 Vendor Log Page: 0x0 00:27:32.429 00:27:32.429 Number of Queues 00:27:32.429 ================ 00:27:32.429 Number of I/O Submission Queues: 128 00:27:32.429 Number of I/O Completion Queues: 128 00:27:32.429 00:27:32.429 ZNS Specific Controller Data 00:27:32.429 ============================ 00:27:32.429 Zone Append Size Limit: 0 00:27:32.429 00:27:32.429 00:27:32.429 Active Namespaces 00:27:32.429 ================= 00:27:32.429 get_feature(0x05) failed 00:27:32.429 Namespace ID:1 00:27:32.429 Command Set Identifier: NVM (00h) 00:27:32.429 Deallocate: Supported 00:27:32.429 Deallocated/Unwritten Error: Not Supported 00:27:32.429 Deallocated Read Value: Unknown 00:27:32.429 Deallocate in Write Zeroes: Not Supported 00:27:32.429 Deallocated Guard Field: 0xFFFF 00:27:32.429 Flush: Supported 00:27:32.429 Reservation: Not Supported 00:27:32.429 Namespace Sharing Capabilities: Multiple Controllers 00:27:32.429 Size (in LBAs): 1953525168 (931GiB) 00:27:32.429 Capacity (in LBAs): 1953525168 (931GiB) 00:27:32.429 Utilization (in LBAs): 1953525168 (931GiB) 00:27:32.429 UUID: cdbc7ff8-97a7-462b-b596-8ba9e849cec8 00:27:32.429 Thin Provisioning: Not Supported 00:27:32.429 Per-NS Atomic Units: Yes 00:27:32.429 Atomic Boundary Size (Normal): 0 00:27:32.429 Atomic Boundary Size (PFail): 0 00:27:32.429 Atomic Boundary Offset: 0 00:27:32.429 NGUID/EUI64 Never Reused: No 00:27:32.429 ANA group ID: 1 00:27:32.429 Namespace Write Protected: No 00:27:32.429 Number of LBA Formats: 1 00:27:32.429 Current LBA Format: LBA Format #00 00:27:32.429 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:32.429 00:27:32.429 12:57:24 nvmf_tcp.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:32.429 12:57:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:32.429 12:57:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:32.429 12:57:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:32.429 12:57:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:32.429 12:57:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:32.429 12:57:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:32.429 rmmod nvme_tcp 00:27:32.429 rmmod nvme_fabrics 00:27:32.429 12:57:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:32.429 12:57:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:32.429 12:57:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:32.429 12:57:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:32.429 12:57:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:32.429 12:57:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:32.429 12:57:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:32.429 12:57:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:32.429 12:57:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:32.429 12:57:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.429 12:57:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:32.429 12:57:24 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.968 12:57:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:34.968 12:57:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:34.968 12:57:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:34.968 12:57:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:34.968 12:57:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:34.968 12:57:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:34.968 12:57:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:34.968 12:57:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:34.968 12:57:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:34.968 12:57:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:34.968 12:57:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:37.496 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:37.496 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:37.496 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:37.496 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:37.496 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:37.496 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:37.496 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:37.496 0000:00:04.0 (8086 2021): ioatdma -> 
vfio-pci 00:27:37.496 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:37.496 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:37.496 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:37.496 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:37.496 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:37.496 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:37.496 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:37.496 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:38.429 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:27:38.429 00:27:38.429 real 0m16.431s 00:27:38.429 user 0m4.087s 00:27:38.429 sys 0m8.578s 00:27:38.429 12:57:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:38.429 12:57:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:38.429 ************************************ 00:27:38.429 END TEST nvmf_identify_kernel_target 00:27:38.429 ************************************ 00:27:38.429 12:57:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:38.429 12:57:30 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:38.429 12:57:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:38.429 12:57:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:38.429 12:57:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:38.429 ************************************ 00:27:38.429 START TEST nvmf_auth_host 00:27:38.429 ************************************ 00:27:38.429 12:57:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:38.687 * Looking for test storage... 
00:27:38.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:38.687 12:57:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.687 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:38.687 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.687 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.687 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.687 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.687 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.687 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.687 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.687 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.687 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.688 
12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:38.688 
12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:38.688 12:57:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:45.256 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.256 12:57:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:45.256 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:45.256 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:af:00.0: cvl_0_0' 00:27:45.257 Found net devices under 0000:af:00.0: cvl_0_0 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:45.257 Found net devices under 0000:af:00.1: cvl_0_1 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:45.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:45.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:27:45.257 00:27:45.257 --- 10.0.0.2 ping statistics --- 00:27:45.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.257 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:45.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:27:45.257 00:27:45.257 --- 10.0.0.1 ping statistics --- 00:27:45.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.257 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.257 12:57:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=4085023 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 4085023 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 4085023 ']' 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:45.257 12:57:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c9c2d632f8ad775eec7c01c5e0037512 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Gj9 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c9c2d632f8ad775eec7c01c5e0037512 0 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c9c2d632f8ad775eec7c01c5e0037512 0 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c9c2d632f8ad775eec7c01c5e0037512 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Gj9 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Gj9 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Gj9 00:27:45.257 12:57:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=de3158558d21a888e6c84ba5ba850137b67ea70e6ac131e8230656877206dad0 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.3nN 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key de3158558d21a888e6c84ba5ba850137b67ea70e6ac131e8230656877206dad0 3 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 de3158558d21a888e6c84ba5ba850137b67ea70e6ac131e8230656877206dad0 3 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=de3158558d21a888e6c84ba5ba850137b67ea70e6ac131e8230656877206dad0 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.3nN 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.3nN 00:27:45.258 12:57:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.3nN 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bf392480c5ad08a8615a78e3ed486ddb6467a171fce794f6 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.eae 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bf392480c5ad08a8615a78e3ed486ddb6467a171fce794f6 0 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bf392480c5ad08a8615a78e3ed486ddb6467a171fce794f6 0 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bf392480c5ad08a8615a78e3ed486ddb6467a171fce794f6 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.eae 00:27:45.258 12:57:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.eae 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.eae 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ffebbc03b09ab4fd28b8e8e66a99c1aaef1219fd75519996 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.1Kb 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ffebbc03b09ab4fd28b8e8e66a99c1aaef1219fd75519996 2 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ffebbc03b09ab4fd28b8e8e66a99c1aaef1219fd75519996 2 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ffebbc03b09ab4fd28b8e8e66a99c1aaef1219fd75519996 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:45.258 12:57:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.1Kb 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.1Kb 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.1Kb 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:45.258 12:57:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b9ac2e6d0e03045496280bd33ac4dd32 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.wZ1 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b9ac2e6d0e03045496280bd33ac4dd32 1 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b9ac2e6d0e03045496280bd33ac4dd32 1 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b9ac2e6d0e03045496280bd33ac4dd32 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.wZ1 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.wZ1 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.wZ1 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=af09b7dd0d9629c49b4f6499ee67d7be 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Olq 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key af09b7dd0d9629c49b4f6499ee67d7be 1 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 af09b7dd0d9629c49b4f6499ee67d7be 1 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=af09b7dd0d9629c49b4f6499ee67d7be 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:45.258 
12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Olq 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Olq 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Olq 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8fe46d58fe00b9f3199ca7031d8b5065f4f1e42caece6fc0 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.JMT 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8fe46d58fe00b9f3199ca7031d8b5065f4f1e42caece6fc0 2 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8fe46d58fe00b9f3199ca7031d8b5065f4f1e42caece6fc0 2 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=8fe46d58fe00b9f3199ca7031d8b5065f4f1e42caece6fc0 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:45.258 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.JMT 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.JMT 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.JMT 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b0b37fb01c80fc873cb238c022409403 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ge3 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b0b37fb01c80fc873cb238c022409403 0 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b0b37fb01c80fc873cb238c022409403 0 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:45.517 12:57:37 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b0b37fb01c80fc873cb238c022409403 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ge3 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ge3 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.ge3 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b33bbcf89021b4c14cc00fe6b40ae932a5e60c56b6e1f0f368f12507df76b775 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qrM 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b33bbcf89021b4c14cc00fe6b40ae932a5e60c56b6e1f0f368f12507df76b775 3 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b33bbcf89021b4c14cc00fe6b40ae932a5e60c56b6e1f0f368f12507df76b775 3 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b33bbcf89021b4c14cc00fe6b40ae932a5e60c56b6e1f0f368f12507df76b775 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qrM 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qrM 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.qrM 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 4085023 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 4085023 ']' 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
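The `gen_dhchap_key` calls traced above draw random bytes with `xxd` from `/dev/urandom`, then feed them through an un-echoed `python -` heredoc (`format_key`) to produce a `DHHC-1:<dd>:<base64>:` secret. The heredoc body never appears in the trace; the sketch below is a plausible reconstruction only, based on the secrets visible later in this log and on the NVMe DH-HMAC-CHAP secret representation (key material with a CRC-32 appended least-significant byte first, then base64-encoded). The function name and CRC details are assumptions, not confirmed by the trace.

```python
import base64
import zlib

def format_dhchap_key(key: str, digest: int, prefix: str = "DHHC-1") -> str:
    """Reconstruction (assumption) of the un-echoed `python -` step in
    nvmf/common.sh: append a little-endian CRC-32 of the ASCII key string
    and base64-encode, yielding e.g. DHHC-1:00:<base64>:."""
    raw = key.encode("ascii")
    # CRC-32 transmitted LSB-first is the documented DH-HMAC-CHAP secret
    # trailer; whether this matches SPDK's helper exactly is an assumption.
    crc = zlib.crc32(raw).to_bytes(4, "little")
    return "{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(raw + crc).decode())

# Hex string taken from the trace above (keys[1], digest 0 == "null"):
secret = format_dhchap_key("bf392480c5ad08a8615a78e3ed486ddb6467a171fce794f6", 0)
```

Decoding the base64 field of such a secret recovers the original hex string plus the four trailer bytes, which is how the keyring can validate a key file on load.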
00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:45.517 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Gj9 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.3nN ]] 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3nN 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.eae 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.1Kb ]] 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1Kb 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.wZ1 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Olq ]] 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Olq 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.JMT 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.775 
12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ge3 ]] 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ge3 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.qrM 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:45.775 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:45.776 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:45.776 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:45.776 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:45.776 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:27:45.776 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:45.776 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:45.776 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:45.776 12:57:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:49.054 Waiting for block devices as requested 00:27:49.054 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:27:49.054 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:49.054 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:49.054 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:49.054 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:49.054 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:49.054 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:49.054 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:49.313 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:49.313 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:49.313 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:49.571 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:49.571 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:49.571 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:49.829 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:49.829 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:49.829 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:50.397 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:50.398 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:50.398 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:50.398 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:50.398 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:50.398 12:57:42 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:50.398 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:50.398 12:57:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:50.398 12:57:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:50.398 No valid GPT data, bailing 00:27:50.398 12:57:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:50.398 12:57:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:50.398 12:57:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:50.398 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:50.398 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:50.398 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
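The `configure_kernel_target` steps traced here (a series of `mkdir`s under `/sys/kernel/config/nvmet`, bare `echo`s, and a final `ln -s`) are the standard configfs recipe for a kernel NVMe/TCP target. The sketch below condenses them with the values from this run; the mapping of each traced `echo` to a specific attribute file is inferred from the standard nvmet configfs layout, not shown in the trace itself. It must run as root with the `nvmet` and `nvmet-tcp` modules available.

```shell
#!/bin/sh
# Condensed sketch of the traced configure_kernel_target(): build a kernel
# NVMe/TCP target via configfs. NQN, device, and address are this run's
# values; the attribute-file names are the standard nvmet ones (inferred).
set -e
nvmet=/sys/kernel/config/nvmet
subnqn=nqn.2024-02.io.spdk:cnode0

modprobe nvmet nvmet-tcp

mkdir "$nvmet/subsystems/$subnqn"
mkdir "$nvmet/subsystems/$subnqn/namespaces/1"
mkdir "$nvmet/ports/1"

echo "SPDK-$subnqn" > "$nvmet/subsystems/$subnqn/attr_serial"
echo 1              > "$nvmet/subsystems/$subnqn/attr_allow_any_host"
echo /dev/nvme0n1   > "$nvmet/subsystems/$subnqn/namespaces/1/device_path"
echo 1              > "$nvmet/subsystems/$subnqn/namespaces/1/enable"

echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"

# Expose the subsystem on the port.
ln -s "$nvmet/subsystems/$subnqn" "$nvmet/ports/1/subsystems/"
```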
# echo ipv4 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:27:50.657 00:27:50.657 Discovery Log Number of Records 2, Generation counter 2 00:27:50.657 =====Discovery Log Entry 0====== 00:27:50.657 trtype: tcp 00:27:50.657 adrfam: ipv4 00:27:50.657 subtype: current discovery subsystem 00:27:50.657 treq: not specified, sq flow control disable supported 00:27:50.657 portid: 1 00:27:50.657 trsvcid: 4420 00:27:50.657 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:50.657 traddr: 10.0.0.1 00:27:50.657 eflags: none 00:27:50.657 sectype: none 00:27:50.657 =====Discovery Log Entry 1====== 00:27:50.657 trtype: tcp 00:27:50.657 adrfam: ipv4 00:27:50.657 subtype: nvme subsystem 00:27:50.657 treq: not specified, sq flow control disable supported 00:27:50.657 portid: 1 00:27:50.657 trsvcid: 4420 00:27:50.657 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:50.657 traddr: 10.0.0.1 00:27:50.657 eflags: none 00:27:50.657 sectype: none 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.657 12:57:42 
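The `nvme discover` output captured above is plain text with one `=====Discovery Log Entry N======` header per record. A small parser (a hypothetical helper for illustration, not part of the test suite) turns it into per-entry dicts, which is handy when a script needs to pick the target subsystem's `traddr` out of the discovery page.

```python
import re

def parse_discovery_log(text: str) -> list[dict]:
    """Split `nvme discover` output into one field->value dict per
    '=====Discovery Log Entry N=====' block."""
    entries = []
    current = None
    for line in text.splitlines():
        line = line.strip()
        if re.match(r"=+Discovery Log Entry \d+=+", line):
            current = {}
            entries.append(current)
        elif current is not None and ":" in line:
            # Split on the first colon only: values like subnqn contain ':'.
            field, _, value = line.partition(":")
            current[field.strip()] = value.strip()
    return entries

# Excerpt of the output captured in this run:
sample = """\
Discovery Log Number of Records 2, Generation counter 2
=====Discovery Log Entry 0======
trtype:  tcp
subtype: current discovery subsystem
subnqn:  nqn.2014-08.org.nvmexpress.discovery
=====Discovery Log Entry 1======
trtype:  tcp
subtype: nvme subsystem
subnqn:  nqn.2024-02.io.spdk:cnode0
traddr:  10.0.0.1
"""
records = parse_discovery_log(sample)
```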
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: ]] 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.657 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.950 nvme0n1 00:27:50.950 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.950 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.950 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.950 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.950 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.950 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.950 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.950 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.950 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: ]] 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.951 nvme0n1 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.951 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.257 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: ]] 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.258 12:57:42 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.258 12:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.258 nvme0n1 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.258 12:57:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: ]] 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.258 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.517 nvme0n1 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: ]] 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.517 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.775 nvme0n1 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.775 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.776 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.776 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.776 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.776 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.776 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.776 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.776 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:51.776 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.776 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.045 nvme0n1 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.045 12:57:43 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: ]] 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.045 12:57:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.306 nvme0n1 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: ]] 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # dhgroup=ffdhe3072 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:52.306 12:57:44 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.306 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.564 nvme0n1 00:27:52.564 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.564 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.564 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.564 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.564 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.564 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.564 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.564 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.564 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.564 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.564 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.564 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.564 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:52.564 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.564 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.564 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.564 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:52.564 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:27:52.564 12:57:44 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:27:52.564 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.564 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: ]] 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.565 12:57:44 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.565 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.821 nvme0n1 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: ]] 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.821 12:57:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.821 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.079 nvme0n1 00:27:53.079 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.079 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.079 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.079 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.079 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.079 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.079 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.079 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.080 12:57:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.337 nvme0n1 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: ]] 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 
00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.337 12:57:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.337 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.594 nvme0n1 00:27:53.594 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.594 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.594 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.594 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.594 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.594 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 
00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: ]] 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.852 
12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.852 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.110 nvme0n1 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.110 12:57:45 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: ]] 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:54.110 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.111 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:54.111 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.111 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.111 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.111 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.111 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.111 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.111 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.111 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.111 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.111 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.111 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.111 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.111 12:57:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.111 12:57:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.111 12:57:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:54.111 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.111 12:57:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.368 nvme0n1 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: ]] 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:27:54.368 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.625 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.625 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.625 12:57:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.625 12:57:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.625 12:57:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.625 12:57:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.625 12:57:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.625 12:57:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.625 12:57:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.625 12:57:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.625 12:57:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.625 12:57:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.625 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:54.625 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.625 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.883 nvme0n1 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.883 12:57:46 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # 
[[ -z '' ]] 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 
00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.883 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.139 nvme0n1 00:27:55.139 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.139 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.139 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.139 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.139 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.139 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.139 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.139 12:57:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.139 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.139 12:57:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: ]] 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:55.139 
12:57:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.139 12:57:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.140 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.140 12:57:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.140 12:57:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.140 12:57:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.140 12:57:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.140 12:57:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.140 12:57:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.140 12:57:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.140 12:57:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.140 12:57:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.140 12:57:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.140 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:55.140 12:57:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.140 12:57:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.705 nvme0n1 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.705 12:57:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: ]] 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.705 12:57:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.270 nvme0n1 00:27:56.270 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.270 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.270 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.270 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.270 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.270 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.270 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.270 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.270 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.270 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.270 12:57:48 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.270 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: ]] 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.271 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.835 nvme0n1 
00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: ]] 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.835 12:57:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.401 nvme0n1 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.401 12:57:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.401 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.969 nvme0n1 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 
00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: ]] 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.969 12:57:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.969 12:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.904 nvme0n1 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: ]] 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.904 12:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.838 nvme0n1 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: ]] 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.838 12:57:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.839 12:57:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:59.839 12:57:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.839 12:57:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.406 nvme0n1 00:28:00.406 12:57:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.406 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.406 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.406 12:57:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.406 12:57:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.406 12:57:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:28:00.664 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.664 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.664 12:57:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.664 12:57:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.664 12:57:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.664 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.664 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:00.664 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: ]] 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:00.665 12:57:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.665 12:57:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.665 12:57:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.600 nvme0n1 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.600 12:57:53 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.600 12:57:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.166 nvme0n1 00:28:02.166 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.166 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.166 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.166 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.166 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.166 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.166 12:57:54 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.166 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.166 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.166 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: ]] 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.425 
12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.425 nvme0n1 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.425 
12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:02.425 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: ]] 00:28:02.426 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:02.426 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:02.426 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.426 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.426 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:02.426 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:02.426 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:02.426 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:02.426 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.426 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.426 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.426 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.426 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.685 nvme0n1 00:28:02.685 12:57:54 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.685 12:57:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: ]] 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.685 12:57:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.685 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.944 nvme0n1 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.944 
12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: ]] 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:02.944 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.944 12:57:54 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.203 nvme0n1 00:28:03.203 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.203 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.203 12:57:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.203 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.203 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.203 12:57:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:03.203 12:57:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.203 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.462 nvme0n1 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: ]] 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.462 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:03.463 12:57:55 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.463 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.737 nvme0n1 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:03.737 12:57:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: ]] 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.737 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.995 nvme0n1 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:28:03.995 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: ]] 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.996 12:57:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.254 nvme0n1 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: ]] 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.254 12:57:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.254 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.510 nvme0n1 00:28:04.510 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.510 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.510 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.510 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.510 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.510 12:57:56 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.510 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.510 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.510 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.510 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.510 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.511 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.768 nvme0n1 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:04.768 
12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: ]] 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.768 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.768 
12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.769 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.769 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.769 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.769 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.769 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.769 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.769 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.769 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.769 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.769 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.769 12:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.769 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:04.769 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.769 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.026 nvme0n1 00:28:05.026 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.026 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.026 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.026 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.026 12:57:56 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.026 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.283 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.283 12:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.283 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.283 12:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: ]] 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.283 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.579 nvme0n1 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: ]] 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.579 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.837 nvme0n1 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: ]] 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.837 12:57:57 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.837 12:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.095 nvme0n1 00:28:06.095 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.095 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.095 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.095 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.095 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.095 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.353 12:57:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:06.353 12:57:58 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.353 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.611 nvme0n1 00:28:06.611 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.611 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.611 12:57:58 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.611 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.611 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.611 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.611 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.611 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.611 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.611 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:06.612 12:57:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: ]] 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.612 12:57:58 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.612 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.178 nvme0n1 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.178 
12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: ]] 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.178 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.179 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.179 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.179 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.179 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.179 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.179 12:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.179 12:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:07.179 12:57:58 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.179 12:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.746 nvme0n1 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: ]] 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.746 12:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.313 nvme0n1 00:28:08.313 12:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.313 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.313 12:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.313 12:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.313 12:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.313 12:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.313 12:58:00 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: ]] 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.313 12:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.881 nvme0n1 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.881 12:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.448 nvme0n1 00:28:09.448 12:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.448 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.448 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.448 12:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.448 12:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.448 12:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.448 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.448 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.448 12:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:28:09.448 12:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.448 12:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: ]] 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:09.449 12:58:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.449 12:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.016 nvme0n1 00:28:10.016 12:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.275 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.275 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.275 12:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.275 12:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.275 12:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.275 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.275 12:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.275 12:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.275 12:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: ]] 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.275 12:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.276 12:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.276 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:10.276 12:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.276 12:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.212 nvme0n1 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: ]] 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.212 12:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.787 nvme0n1 00:28:11.787 12:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.787 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.787 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.787 12:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.787 12:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.787 12:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.787 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.787 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.787 12:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.787 12:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.045 12:58:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: ]] 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.045 12:58:03 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.045 12:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.046 12:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:12.046 12:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.046 12:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.611 nvme0n1 00:28:12.611 12:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.611 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.611 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.611 12:58:04 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.611 12:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.611 12:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:12.870 
12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.870 12:58:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.871 12:58:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.871 12:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:12.871 12:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.871 12:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.807 nvme0n1 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:13.807 
12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: ]] 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.807 nvme0n1 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.807 12:58:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: ]] 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.807 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.065 nvme0n1 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: ]] 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:14.065 
12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.065 12:58:05 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:14.323 nvme0n1 00:28:14.323 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.323 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.323 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.323 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.323 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:14.324 
12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: ]] 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.324 12:58:06 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.324 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.583 nvme0n1 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.583 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.841 nvme0n1 00:28:14.841 12:58:06 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.841 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.841 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.841 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.841 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.841 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.841 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: ]] 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.842 12:58:06 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.842 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.100 nvme0n1 00:28:15.100 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.100 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.100 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.100 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.100 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.100 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.100 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.100 
12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.100 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.100 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.100 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.100 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.100 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:15.100 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.100 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.100 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:15.100 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:15.100 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:15.100 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: ]] 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe3072 1 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 
00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.101 12:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.360 nvme0n1 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: ]] 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.360 12:58:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.360 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.634 nvme0n1 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.634 12:58:07 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: ]] 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.634 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.892 nvme0n1 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.892 12:58:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.892 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.149 nvme0n1 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: ]] 00:28:16.149 12:58:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.149 12:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.149 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.149 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.149 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.149 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.149 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.149 12:58:08 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.149 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.149 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.149 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:16.149 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.149 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.408 nvme0n1 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:16.408 12:58:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: ]] 00:28:16.408 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.666 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.925 nvme0n1 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: ]] 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.925 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.213 nvme0n1 00:28:17.213 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.213 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.213 12:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.213 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.213 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.213 12:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: ]] 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.213 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.499 nvme0n1 00:28:17.499 12:58:09 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.499 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.066 nvme0n1 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:18.066 
12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: ]] 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:18.066 12:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.066 12:58:09 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.325 nvme0n1 00:28:18.325 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.325 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.325 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.325 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.325 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: ]] 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.584 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.151 nvme0n1 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.151 12:58:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: ]] 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.151 12:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.716 nvme0n1 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:19.716 12:58:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: ]] 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.716 12:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.281 nvme0n1 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.281 12:58:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.281 12:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.281 12:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.281 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.281 12:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.281 12:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.281 12:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.281 12:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.281 12:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.281 12:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.281 12:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.281 12:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.281 12:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.281 12:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.281 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:20.281 12:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.281 12:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.540 nvme0n1 00:28:20.540 12:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=0 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzljMmQ2MzJmOGFkNzc1ZWVjN2MwMWM1ZTAwMzc1MTINNk7u: 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: ]] 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGUzMTU4NTU4ZDIxYTg4OGU2Yzg0YmE1YmE4NTAxMzdiNjdlYTcwZTZhYzEzMWU4MjMwNjU2ODc3MjA2ZGFkMJUFjeE=: 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.799 12:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.733 nvme0n1 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: ]] 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.733 12:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.734 12:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.734 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.734 12:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.734 12:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.734 12:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.734 12:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.734 12:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.734 12:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.734 12:58:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.734 12:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.734 12:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.734 12:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.734 12:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:21.734 12:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.734 12:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.299 nvme0n1 00:28:22.299 12:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjlhYzJlNmQwZTAzMDQ1NDk2MjgwYmQzM2FjNGRkMzJsD9a2: 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: ]] 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYwOWI3ZGQwZDk2MjljNDliNGY2NDk5ZWU2N2Q3YmV2zGgT: 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.558 12:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.493 nvme0n1 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OGZlNDZkNThmZTAwYjlmMzE5OWNhNzAzMWQ4YjUwNjVmNGYxZTQyY2FlY2U2ZmMwlcLteg==: 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: ]] 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjBiMzdmYjAxYzgwZmM4NzNjYjIzOGMwMjI0MDk0MDOBaBuJ: 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.493 12:58:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.493 12:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.060 nvme0n1 00:28:24.060 12:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.060 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.060 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.060 12:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.060 12:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.060 12:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.060 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.060 12:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.060 12:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.060 12:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.318 12:58:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjMzYmJjZjg5MDIxYjRjMTRjYzAwZmU2YjQwYWU5MzJhNWU2MGM1NmI2ZTFmMGYzNjhmMTI1MDdkZjc2Yjc3NTVFFFM=: 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:24.318 12:58:16 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.318 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.885 nvme0n1 00:28:24.885 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.885 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.885 12:58:16 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.885 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.885 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.885 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.885 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.885 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.885 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.885 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.885 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.885 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYzOTI0ODBjNWFkMDhhODYxNWE3OGUzZWQ0ODZkZGI2NDY3YTE3MWZjZTc5NGY2RDdtSA==: 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: ]] 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmZlYmJjMDNiMDlhYjRmZDI4YjhlOGU2NmE5OWMxYWFlZjEyMTlmZDc1NTE5OTk27EwFvA==: 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@648 -- # local es=0 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:25.145 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.146 request: 00:28:25.146 { 00:28:25.146 "name": "nvme0", 00:28:25.146 "trtype": "tcp", 00:28:25.146 "traddr": "10.0.0.1", 00:28:25.146 "adrfam": "ipv4", 00:28:25.146 "trsvcid": "4420", 00:28:25.146 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:25.146 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:25.146 "prchk_reftag": false, 00:28:25.146 "prchk_guard": false, 00:28:25.146 "hdgst": false, 00:28:25.146 "ddgst": false, 00:28:25.146 "method": "bdev_nvme_attach_controller", 00:28:25.146 "req_id": 1 00:28:25.146 } 00:28:25.146 Got JSON-RPC error response 00:28:25.146 response: 00:28:25.146 { 00:28:25.146 "code": -5, 00:28:25.146 "message": "Input/output error" 00:28:25.146 } 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 
00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.146 12:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.146 request: 00:28:25.146 { 00:28:25.146 "name": "nvme0", 00:28:25.146 "trtype": "tcp", 00:28:25.146 "traddr": "10.0.0.1", 00:28:25.146 "adrfam": "ipv4", 00:28:25.146 "trsvcid": "4420", 00:28:25.146 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:25.146 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:25.146 "prchk_reftag": false, 00:28:25.146 "prchk_guard": false, 00:28:25.146 "hdgst": false, 00:28:25.146 "ddgst": false, 00:28:25.146 "dhchap_key": "key2", 00:28:25.146 "method": "bdev_nvme_attach_controller", 00:28:25.146 "req_id": 1 00:28:25.146 } 00:28:25.146 Got JSON-RPC error response 00:28:25.146 response: 00:28:25.146 { 
00:28:25.146 "code": -5, 00:28:25.146 "message": "Input/output error" 00:28:25.146 } 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:25.146 
12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:25.146 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:25.147 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:25.147 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:25.147 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:25.147 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:25.147 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:25.147 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.147 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.406 request: 00:28:25.406 { 00:28:25.406 "name": "nvme0", 00:28:25.406 "trtype": "tcp", 00:28:25.406 "traddr": "10.0.0.1", 00:28:25.406 "adrfam": "ipv4", 00:28:25.406 "trsvcid": "4420", 00:28:25.406 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:25.406 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:25.406 
"prchk_reftag": false, 00:28:25.406 "prchk_guard": false, 00:28:25.406 "hdgst": false, 00:28:25.406 "ddgst": false, 00:28:25.406 "dhchap_key": "key1", 00:28:25.406 "dhchap_ctrlr_key": "ckey2", 00:28:25.406 "method": "bdev_nvme_attach_controller", 00:28:25.406 "req_id": 1 00:28:25.406 } 00:28:25.406 Got JSON-RPC error response 00:28:25.406 response: 00:28:25.406 { 00:28:25.406 "code": -5, 00:28:25.406 "message": "Input/output error" 00:28:25.406 } 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:25.406 rmmod nvme_tcp 00:28:25.406 rmmod nvme_fabrics 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@125 -- # return 0 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 4085023 ']' 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 4085023 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 4085023 ']' 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 4085023 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4085023 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4085023' 00:28:25.406 killing process with pid 4085023 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 4085023 00:28:25.406 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 4085023 00:28:25.665 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:25.665 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:25.665 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:25.665 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:25.665 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:25.665 12:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.665 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:28:25.665 12:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.568 12:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:27.568 12:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:27.568 12:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:27.568 12:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:27.568 12:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:27.568 12:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:27.827 12:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:27.827 12:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:27.827 12:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:27.827 12:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:27.827 12:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:27.827 12:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:27.827 12:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:30.362 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:30.362 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:30.362 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:30.362 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 
00:28:30.362 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:30.362 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:30.362 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:30.362 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:30.362 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:30.362 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:30.362 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:30.362 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:30.362 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:30.362 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:30.362 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:30.362 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:31.301 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:28:31.561 12:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Gj9 /tmp/spdk.key-null.eae /tmp/spdk.key-sha256.wZ1 /tmp/spdk.key-sha384.JMT /tmp/spdk.key-sha512.qrM /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:31.561 12:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:34.098 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:34.098 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:34.098 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:34.098 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:34.098 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:34.098 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:34.098 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:34.098 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:34.098 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:34.098 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:34.098 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:34.098 
0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:34.098 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:34.098 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:34.098 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:34.098 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:34.098 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:34.098 00:28:34.098 real 0m55.568s 00:28:34.098 user 0m50.432s 00:28:34.098 sys 0m12.513s 00:28:34.098 12:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:34.098 12:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.098 ************************************ 00:28:34.098 END TEST nvmf_auth_host 00:28:34.098 ************************************ 00:28:34.098 12:58:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:34.098 12:58:25 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:28:34.098 12:58:25 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:34.098 12:58:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:34.098 12:58:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:34.098 12:58:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:34.098 ************************************ 00:28:34.098 START TEST nvmf_digest 00:28:34.098 ************************************ 00:28:34.098 12:58:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:34.356 * Looking for test storage... 
00:28:34.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:34.357 12:58:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:39.630 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:39.630 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:39.630 Found net devices under 0000:af:00.0: cvl_0_0 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:39.630 Found net devices under 0000:af:00.1: cvl_0_1 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:28:39.630 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:39.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:39.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:28:39.890 00:28:39.890 --- 10.0.0.2 ping statistics --- 00:28:39.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.890 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:39.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:39.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:28:39.890 00:28:39.890 --- 10.0.0.1 ping statistics --- 00:28:39.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.890 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:39.890 ************************************ 00:28:39.890 START TEST nvmf_digest_clean 00:28:39.890 ************************************ 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:28:39.890 12:58:31 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=4099712 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 4099712 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 4099712 ']' 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:39.890 12:58:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:39.890 [2024-07-15 12:58:31.828001] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:28:39.890 [2024-07-15 12:58:31.828041] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:40.149 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.149 [2024-07-15 12:58:31.891980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.149 [2024-07-15 12:58:31.981078] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:40.149 [2024-07-15 12:58:31.981118] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:40.149 [2024-07-15 12:58:31.981128] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:40.149 [2024-07-15 12:58:31.981137] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:40.149 [2024-07-15 12:58:31.981145] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:40.149 [2024-07-15 12:58:31.981169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.149 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:40.149 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:40.149 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:40.149 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:40.149 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:40.408 null0 00:28:40.408 [2024-07-15 12:58:32.200293] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.408 [2024-07-15 12:58:32.224474] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:40.408 
12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4099737 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4099737 /var/tmp/bperf.sock 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 4099737 ']' 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:40.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:40.408 12:58:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:40.408 [2024-07-15 12:58:32.308681] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:28:40.408 [2024-07-15 12:58:32.308788] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4099737 ] 00:28:40.667 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.667 [2024-07-15 12:58:32.427767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.667 [2024-07-15 12:58:32.536490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.602 12:58:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:41.602 12:58:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:41.602 12:58:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:41.602 12:58:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:41.602 12:58:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:42.168 12:58:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:42.168 12:58:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:42.735 nvme0n1 00:28:42.992 12:58:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:42.992 12:58:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:28:42.992 Running I/O for 2 seconds... 00:28:45.523 00:28:45.523 Latency(us) 00:28:45.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.523 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:45.523 nvme0n1 : 2.01 14087.79 55.03 0.00 0.00 9072.01 5421.61 19303.33 00:28:45.523 =================================================================================================================== 00:28:45.523 Total : 14087.79 55.03 0.00 0.00 9072.01 5421.61 19303.33 00:28:45.523 0 00:28:45.523 12:58:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:45.523 12:58:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:45.523 12:58:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:45.523 12:58:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:45.523 | select(.opcode=="crc32c") 00:28:45.523 | "\(.module_name) \(.executed)"' 00:28:45.523 12:58:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:45.523 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:45.523 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:45.523 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:45.523 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:45.523 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4099737 00:28:45.523 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 4099737 ']' 00:28:45.523 12:58:37 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 4099737 00:28:45.523 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:45.523 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:45.524 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4099737 00:28:45.524 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:45.524 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:45.524 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4099737' 00:28:45.524 killing process with pid 4099737 00:28:45.524 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 4099737 00:28:45.524 Received shutdown signal, test time was about 2.000000 seconds 00:28:45.524 00:28:45.524 Latency(us) 00:28:45.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.524 =================================================================================================================== 00:28:45.524 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:45.524 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 4099737 00:28:45.783 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:45.783 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:45.783 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:45.783 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:45.783 12:58:37 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:45.783 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:45.783 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:45.783 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4100777 00:28:45.783 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4100777 /var/tmp/bperf.sock 00:28:45.783 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:45.783 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 4100777 ']' 00:28:45.783 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:45.783 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:45.784 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:45.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:45.784 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:45.784 12:58:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:45.784 [2024-07-15 12:58:37.527068] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:28:45.784 [2024-07-15 12:58:37.527131] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4100777 ] 00:28:45.784 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:45.784 Zero copy mechanism will not be used. 00:28:45.784 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.784 [2024-07-15 12:58:37.608285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.784 [2024-07-15 12:58:37.709337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.721 12:58:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:46.721 12:58:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:46.721 12:58:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:46.721 12:58:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:46.721 12:58:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:46.721 12:58:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:46.721 12:58:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:47.318 nvme0n1 00:28:47.318 12:58:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:47.318 12:58:38 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:47.318 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:47.318 Zero copy mechanism will not be used. 00:28:47.318 Running I/O for 2 seconds... 00:28:49.223 00:28:49.223 Latency(us) 00:28:49.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.223 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:49.223 nvme0n1 : 2.00 3691.55 461.44 0.00 0.00 4328.93 889.95 8996.31 00:28:49.223 =================================================================================================================== 00:28:49.223 Total : 3691.55 461.44 0.00 0.00 4328.93 889.95 8996.31 00:28:49.223 0 00:28:49.223 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:49.223 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:49.223 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:49.223 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:49.223 | select(.opcode=="crc32c") 00:28:49.223 | "\(.module_name) \(.executed)"' 00:28:49.223 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:49.482 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:49.482 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:49.482 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:49.482 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:28:49.482 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4100777 00:28:49.482 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 4100777 ']' 00:28:49.482 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 4100777 00:28:49.482 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:49.482 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:49.482 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4100777 00:28:49.741 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:49.741 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:49.741 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4100777' 00:28:49.741 killing process with pid 4100777 00:28:49.741 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 4100777 00:28:49.741 Received shutdown signal, test time was about 2.000000 seconds 00:28:49.741 00:28:49.741 Latency(us) 00:28:49.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.741 =================================================================================================================== 00:28:49.741 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:49.741 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 4100777 00:28:49.741 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:49.741 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 
00:28:49.741 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:49.741 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:49.741 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:49.741 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:49.741 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:49.741 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4101390 00:28:49.741 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4101390 /var/tmp/bperf.sock 00:28:49.741 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:49.742 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 4101390 ']' 00:28:49.742 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:49.742 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:49.742 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:49.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:49.742 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:49.742 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:50.001 [2024-07-15 12:58:41.702127] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:28:50.001 [2024-07-15 12:58:41.702174] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4101390 ] 00:28:50.001 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.001 [2024-07-15 12:58:41.772114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.001 [2024-07-15 12:58:41.874983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.001 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:50.001 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:50.001 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:50.001 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:50.001 12:58:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:50.259 12:58:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:50.259 12:58:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:50.827 nvme0n1 00:28:50.827 12:58:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:50.827 12:58:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:28:50.827 Running I/O for 2 seconds... 00:28:52.834 00:28:52.834 Latency(us) 00:28:52.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.834 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:52.834 nvme0n1 : 2.01 18059.76 70.55 0.00 0.00 7069.96 3664.06 10426.18 00:28:52.834 =================================================================================================================== 00:28:52.834 Total : 18059.76 70.55 0.00 0.00 7069.96 3664.06 10426.18 00:28:52.834 0 00:28:53.104 12:58:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:53.104 12:58:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:53.104 12:58:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:53.104 12:58:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:53.104 | select(.opcode=="crc32c") 00:28:53.104 | "\(.module_name) \(.executed)"' 00:28:53.104 12:58:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:53.363 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:53.363 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:53.363 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:53.363 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:53.363 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4101390 00:28:53.363 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 4101390 ']' 00:28:53.363 12:58:45 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 4101390 00:28:53.363 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:53.363 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:53.363 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4101390 00:28:53.363 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:53.363 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:53.363 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4101390' 00:28:53.363 killing process with pid 4101390 00:28:53.363 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 4101390 00:28:53.363 Received shutdown signal, test time was about 2.000000 seconds 00:28:53.363 00:28:53.363 Latency(us) 00:28:53.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.363 =================================================================================================================== 00:28:53.363 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:53.363 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 4101390 00:28:53.622 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:53.622 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:53.622 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:53.622 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:53.622 12:58:45 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:53.622 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:53.622 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:53.622 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4102119 00:28:53.622 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4102119 /var/tmp/bperf.sock 00:28:53.622 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:53.622 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 4102119 ']' 00:28:53.622 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:53.622 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:53.622 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:53.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:53.622 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:53.622 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:53.622 [2024-07-15 12:58:45.369944] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:28:53.622 [2024-07-15 12:58:45.370006] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4102119 ] 00:28:53.622 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:53.622 Zero copy mechanism will not be used. 00:28:53.622 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.622 [2024-07-15 12:58:45.451106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.622 [2024-07-15 12:58:45.545833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.881 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:53.881 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:53.881 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:53.881 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:53.881 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:54.140 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:54.140 12:58:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:54.398 nvme0n1 00:28:54.398 12:58:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:54.398 12:58:46 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:54.398 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:54.398 Zero copy mechanism will not be used. 00:28:54.398 Running I/O for 2 seconds... 00:28:56.301 00:28:56.301 Latency(us) 00:28:56.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.301 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:56.302 nvme0n1 : 2.00 4541.66 567.71 0.00 0.00 3515.05 2710.81 8043.05 00:28:56.302 =================================================================================================================== 00:28:56.302 Total : 4541.66 567.71 0.00 0.00 3515.05 2710.81 8043.05 00:28:56.302 0 00:28:56.560 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:56.560 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:56.560 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:56.560 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:56.560 | select(.opcode=="crc32c") 00:28:56.560 | "\(.module_name) \(.executed)"' 00:28:56.560 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:56.818 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:56.818 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:56.818 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:56.818 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:28:56.818 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4102119 00:28:56.818 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 4102119 ']' 00:28:56.818 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 4102119 00:28:56.819 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:56.819 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:56.819 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4102119 00:28:56.819 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:56.819 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:56.819 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4102119' 00:28:56.819 killing process with pid 4102119 00:28:56.819 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 4102119 00:28:56.819 Received shutdown signal, test time was about 2.000000 seconds 00:28:56.819 00:28:56.819 Latency(us) 00:28:56.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.819 =================================================================================================================== 00:28:56.819 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:56.819 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 4102119 00:28:57.077 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 4099712 00:28:57.077 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 4099712 ']' 00:28:57.077 
12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 4099712 00:28:57.077 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:57.077 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:57.077 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4099712 00:28:57.077 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:57.077 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:57.077 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4099712' 00:28:57.077 killing process with pid 4099712 00:28:57.077 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 4099712 00:28:57.077 12:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 4099712 00:28:57.336 00:28:57.336 real 0m17.276s 00:28:57.336 user 0m35.391s 00:28:57.336 sys 0m4.300s 00:28:57.336 12:58:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:57.336 12:58:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:57.336 ************************************ 00:28:57.336 END TEST nvmf_digest_clean 00:28:57.336 ************************************ 00:28:57.336 12:58:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:57.336 12:58:49 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:57.336 12:58:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:57.336 12:58:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:57.336 12:58:49 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:57.336 ************************************ 00:28:57.336 START TEST nvmf_digest_error 00:28:57.336 ************************************ 00:28:57.336 12:58:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:28:57.336 12:58:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:57.336 12:58:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:57.336 12:58:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:57.336 12:58:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:57.336 12:58:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=4102701 00:28:57.336 12:58:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 4102701 00:28:57.336 12:58:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:57.336 12:58:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 4102701 ']' 00:28:57.336 12:58:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.336 12:58:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:57.336 12:58:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:57.336 12:58:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:57.336 12:58:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:57.336 [2024-07-15 12:58:49.182132] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:28:57.336 [2024-07-15 12:58:49.182184] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:57.336 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.336 [2024-07-15 12:58:49.267550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.594 [2024-07-15 12:58:49.356477] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:57.594 [2024-07-15 12:58:49.356514] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:57.594 [2024-07-15 12:58:49.356524] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:57.594 [2024-07-15 12:58:49.356533] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:57.594 [2024-07-15 12:58:49.356541] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:57.594 [2024-07-15 12:58:49.356563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:28:58.529 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:58.529 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:28:58.529 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:28:58.529 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:58.529 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:58.529 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:58.529 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:28:58.529 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:58.529 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:58.529 [2024-07-15 12:58:50.415612] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:28:58.529 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:58.529 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:28:58.529 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:28:58.529 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:58.529 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:58.787 null0
00:28:58.787 [2024-07-15 12:58:50.515463] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:58.787 [2024-07-15 12:58:50.539657] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:58.787 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:58.787 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:28:58.787 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:58.787 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:58.787 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:58.787 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:58.787 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4102965
00:28:58.787 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4102965 /var/tmp/bperf.sock
00:28:58.787 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:28:58.787 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 4102965 ']'
00:28:58.787 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:58.787 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:58.787 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:58.787 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:58.787 12:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:58.787 [2024-07-15 12:58:50.625841] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization...
00:28:58.787 [2024-07-15 12:58:50.625945] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4102965 ]
00:28:58.787 EAL: No free 2048 kB hugepages reported on node 1
00:28:59.047 [2024-07-15 12:58:50.742598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:59.047 [2024-07-15 12:58:50.845860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:59.982 12:58:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:59.982 12:58:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:28:59.982 12:58:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:59.982 12:58:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:00.550 12:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:00.550 12:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:00.550 12:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:00.550 12:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:00.550 12:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:00.550 12:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:01.116 nvme0n1
00:29:01.116 12:58:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:01.116 12:58:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:01.116 12:58:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:01.116 12:58:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:01.116 12:58:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:01.116 12:58:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:01.374 Running I/O for 2 seconds...
00:29:01.374 [2024-07-15 12:58:53.281144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.374 [2024-07-15 12:58:53.281192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.374 [2024-07-15 12:58:53.281212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.374 [2024-07-15 12:58:53.296681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.374 [2024-07-15 12:58:53.296720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.374 [2024-07-15 12:58:53.296737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.632 [2024-07-15 12:58:53.316837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.632 [2024-07-15 12:58:53.316873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.632 [2024-07-15 12:58:53.316890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.632 [2024-07-15 12:58:53.333008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.632 [2024-07-15 12:58:53.333042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.632 [2024-07-15 12:58:53.333058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.632 [2024-07-15 12:58:53.348341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.632 [2024-07-15 12:58:53.348373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.632 [2024-07-15 12:58:53.348389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.632 [2024-07-15 12:58:53.363203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.632 [2024-07-15 12:58:53.363236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.632 [2024-07-15 12:58:53.363251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.632 [2024-07-15 12:58:53.376669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.632 [2024-07-15 12:58:53.376702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.632 [2024-07-15 12:58:53.376717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.632 [2024-07-15 12:58:53.391506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.632 [2024-07-15 12:58:53.391539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.632 [2024-07-15 12:58:53.391561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.632 [2024-07-15 12:58:53.405942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.632 [2024-07-15 12:58:53.405975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.632 [2024-07-15 12:58:53.405990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.632 [2024-07-15 12:58:53.426143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.632 [2024-07-15 12:58:53.426175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.632 [2024-07-15 12:58:53.426190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.632 [2024-07-15 12:58:53.447137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.632 [2024-07-15 12:58:53.447171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.632 [2024-07-15 12:58:53.447185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.632 [2024-07-15 12:58:53.468589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.632 [2024-07-15 12:58:53.468622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.632 [2024-07-15 12:58:53.468637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.632 [2024-07-15 12:58:53.483315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.632 [2024-07-15 12:58:53.483346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.632 [2024-07-15 12:58:53.483362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.632 [2024-07-15 12:58:53.503910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.632 [2024-07-15 12:58:53.503942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.632 [2024-07-15 12:58:53.503957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.632 [2024-07-15 12:58:53.525556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.632 [2024-07-15 12:58:53.525589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.632 [2024-07-15 12:58:53.525604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.632 [2024-07-15 12:58:53.543659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.632 [2024-07-15 12:58:53.543692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.632 [2024-07-15 12:58:53.543707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.632 [2024-07-15 12:58:53.559020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.632 [2024-07-15 12:58:53.559059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.632 [2024-07-15 12:58:53.559074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.931 [2024-07-15 12:58:53.580408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.931 [2024-07-15 12:58:53.580444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.931 [2024-07-15 12:58:53.580460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.931 [2024-07-15 12:58:53.600191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.931 [2024-07-15 12:58:53.600225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.931 [2024-07-15 12:58:53.600240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.931 [2024-07-15 12:58:53.615752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.931 [2024-07-15 12:58:53.615785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.931 [2024-07-15 12:58:53.615801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.931 [2024-07-15 12:58:53.637153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.931 [2024-07-15 12:58:53.637187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.931 [2024-07-15 12:58:53.637201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.931 [2024-07-15 12:58:53.658196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.931 [2024-07-15 12:58:53.658230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.931 [2024-07-15 12:58:53.658245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.931 [2024-07-15 12:58:53.676873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.931 [2024-07-15 12:58:53.676907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.931 [2024-07-15 12:58:53.676922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.931 [2024-07-15 12:58:53.693772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.931 [2024-07-15 12:58:53.693806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.931 [2024-07-15 12:58:53.693820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.931 [2024-07-15 12:58:53.711531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.931 [2024-07-15 12:58:53.711566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.931 [2024-07-15 12:58:53.711581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.931 [2024-07-15 12:58:53.731545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.931 [2024-07-15 12:58:53.731579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.931 [2024-07-15 12:58:53.731595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.931 [2024-07-15 12:58:53.753741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.931 [2024-07-15 12:58:53.753776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.931 [2024-07-15 12:58:53.753791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.931 [2024-07-15 12:58:53.772870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.931 [2024-07-15 12:58:53.772902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.932 [2024-07-15 12:58:53.772918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.932 [2024-07-15 12:58:53.788207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.932 [2024-07-15 12:58:53.788239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.932 [2024-07-15 12:58:53.788261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.932 [2024-07-15 12:58:53.805743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.932 [2024-07-15 12:58:53.805776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.932 [2024-07-15 12:58:53.805791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.932 [2024-07-15 12:58:53.826873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.932 [2024-07-15 12:58:53.826906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.932 [2024-07-15 12:58:53.826922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.932 [2024-07-15 12:58:53.842298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.932 [2024-07-15 12:58:53.842330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.932 [2024-07-15 12:58:53.842345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.932 [2024-07-15 12:58:53.863035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:01.932 [2024-07-15 12:58:53.863068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.932 [2024-07-15 12:58:53.863083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.190 [2024-07-15 12:58:53.878344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.190 [2024-07-15 12:58:53.878376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.190 [2024-07-15 12:58:53.878397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.190 [2024-07-15 12:58:53.897982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.190 [2024-07-15 12:58:53.898015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.190 [2024-07-15 12:58:53.898030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.191 [2024-07-15 12:58:53.916312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.191 [2024-07-15 12:58:53.916345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.191 [2024-07-15 12:58:53.916360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.191 [2024-07-15 12:58:53.930789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.191 [2024-07-15 12:58:53.930821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.191 [2024-07-15 12:58:53.930836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.191 [2024-07-15 12:58:53.947463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.191 [2024-07-15 12:58:53.947496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.191 [2024-07-15 12:58:53.947510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.191 [2024-07-15 12:58:53.962625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.191 [2024-07-15 12:58:53.962658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.191 [2024-07-15 12:58:53.962672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.191 [2024-07-15 12:58:53.977526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.191 [2024-07-15 12:58:53.977559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.191 [2024-07-15 12:58:53.977574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.191 [2024-07-15 12:58:53.993429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.191 [2024-07-15 12:58:53.993461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.191 [2024-07-15 12:58:53.993476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.191 [2024-07-15 12:58:54.013111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.191 [2024-07-15 12:58:54.013144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.191 [2024-07-15 12:58:54.013160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.191 [2024-07-15 12:58:54.030802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.191 [2024-07-15 12:58:54.030840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.191 [2024-07-15 12:58:54.030856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.191 [2024-07-15 12:58:54.044827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.191 [2024-07-15 12:58:54.044860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.191 [2024-07-15 12:58:54.044876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.191 [2024-07-15 12:58:54.062814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.191 [2024-07-15 12:58:54.062848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.191 [2024-07-15 12:58:54.062863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.191 [2024-07-15 12:58:54.076839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.191 [2024-07-15 12:58:54.076871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.191 [2024-07-15 12:58:54.076887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.191 [2024-07-15 12:58:54.090929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.191 [2024-07-15 12:58:54.090961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.191 [2024-07-15 12:58:54.090976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.191 [2024-07-15 12:58:54.109031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.191 [2024-07-15 12:58:54.109064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.191 [2024-07-15 12:58:54.109079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.191 [2024-07-15 12:58:54.125842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.191 [2024-07-15 12:58:54.125874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.191 [2024-07-15 12:58:54.125889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.449 [2024-07-15 12:58:54.140417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.449 [2024-07-15 12:58:54.140449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.449 [2024-07-15 12:58:54.140465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.449 [2024-07-15 12:58:54.161673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.449 [2024-07-15 12:58:54.161707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.449 [2024-07-15 12:58:54.161721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.449 [2024-07-15 12:58:54.182627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.449 [2024-07-15 12:58:54.182660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.449 [2024-07-15 12:58:54.182675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.449 [2024-07-15 12:58:54.196413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.449 [2024-07-15 12:58:54.196445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.449 [2024-07-15 12:58:54.196460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.449 [2024-07-15 12:58:54.215929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.449 [2024-07-15 12:58:54.215961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.449 [2024-07-15 12:58:54.215976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.449 [2024-07-15 12:58:54.235063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.449 [2024-07-15 12:58:54.235095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.449 [2024-07-15 12:58:54.235111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.449 [2024-07-15 12:58:54.250783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.449 [2024-07-15 12:58:54.250815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.449 [2024-07-15 12:58:54.250829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.449 [2024-07-15 12:58:54.270283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.449 [2024-07-15 12:58:54.270316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.449 [2024-07-15 12:58:54.270331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.449 [2024-07-15 12:58:54.289956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.449 [2024-07-15 12:58:54.289988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.449 [2024-07-15 12:58:54.290003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.449 [2024-07-15 12:58:54.305558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.449 [2024-07-15 12:58:54.305590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.449 [2024-07-15 12:58:54.305605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.449 [2024-07-15 12:58:54.328042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.449 [2024-07-15 12:58:54.328080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.449 [2024-07-15 12:58:54.328096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.449 [2024-07-15 12:58:54.342693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.449 [2024-07-15 12:58:54.342725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.449 [2024-07-15 12:58:54.342740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.449 [2024-07-15 12:58:54.362069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.449 [2024-07-15 12:58:54.362101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.449 [2024-07-15 12:58:54.362116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.449 [2024-07-15 12:58:54.377579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.449 [2024-07-15 12:58:54.377610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.449 [2024-07-15 12:58:54.377626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.707 [2024-07-15 12:58:54.396532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.707 [2024-07-15 12:58:54.396566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.707 [2024-07-15 12:58:54.396582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.707 [2024-07-15 12:58:54.413555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.707 [2024-07-15 12:58:54.413589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.707 [2024-07-15 12:58:54.413605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.707 [2024-07-15 12:58:54.430566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580)
00:29:02.707 [2024-07-15 12:58:54.430599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.707 [2024-07-15 12:58:54.430615]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.707 [2024-07-15 12:58:54.449006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.707 [2024-07-15 12:58:54.449039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.707 [2024-07-15 12:58:54.449055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.707 [2024-07-15 12:58:54.463662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.707 [2024-07-15 12:58:54.463693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.707 [2024-07-15 12:58:54.463709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.707 [2024-07-15 12:58:54.483882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.707 [2024-07-15 12:58:54.483914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.707 [2024-07-15 12:58:54.483929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.707 [2024-07-15 12:58:54.502211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.707 [2024-07-15 12:58:54.502243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4827 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:02.707 [2024-07-15 12:58:54.502264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.707 [2024-07-15 12:58:54.518569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.707 [2024-07-15 12:58:54.518600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.708 [2024-07-15 12:58:54.518616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.708 [2024-07-15 12:58:54.538942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.708 [2024-07-15 12:58:54.538975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.708 [2024-07-15 12:58:54.538990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.708 [2024-07-15 12:58:54.559035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.708 [2024-07-15 12:58:54.559067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.708 [2024-07-15 12:58:54.559082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.708 [2024-07-15 12:58:54.575452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.708 [2024-07-15 12:58:54.575484] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.708 [2024-07-15 12:58:54.575499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.708 [2024-07-15 12:58:54.594825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.708 [2024-07-15 12:58:54.594857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.708 [2024-07-15 12:58:54.594872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.708 [2024-07-15 12:58:54.612838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.708 [2024-07-15 12:58:54.612870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.708 [2024-07-15 12:58:54.612885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.708 [2024-07-15 12:58:54.629175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.708 [2024-07-15 12:58:54.629207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.708 [2024-07-15 12:58:54.629227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.708 [2024-07-15 12:58:54.646482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.708 [2024-07-15 
12:58:54.646514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.708 [2024-07-15 12:58:54.646529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.966 [2024-07-15 12:58:54.664123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.966 [2024-07-15 12:58:54.664156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.966 [2024-07-15 12:58:54.664172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.966 [2024-07-15 12:58:54.679438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.966 [2024-07-15 12:58:54.679471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.966 [2024-07-15 12:58:54.679485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.966 [2024-07-15 12:58:54.699205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.966 [2024-07-15 12:58:54.699237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.966 [2024-07-15 12:58:54.699252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.966 [2024-07-15 12:58:54.718362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x24e5580) 00:29:02.966 [2024-07-15 12:58:54.718394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.966 [2024-07-15 12:58:54.718409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.966 [2024-07-15 12:58:54.733348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.966 [2024-07-15 12:58:54.733381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.966 [2024-07-15 12:58:54.733396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.967 [2024-07-15 12:58:54.755831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.967 [2024-07-15 12:58:54.755864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.967 [2024-07-15 12:58:54.755879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.967 [2024-07-15 12:58:54.774364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.967 [2024-07-15 12:58:54.774397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.967 [2024-07-15 12:58:54.774412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.967 [2024-07-15 12:58:54.789236] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.967 [2024-07-15 12:58:54.789284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.967 [2024-07-15 12:58:54.789300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.967 [2024-07-15 12:58:54.810273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.967 [2024-07-15 12:58:54.810306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.967 [2024-07-15 12:58:54.810321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.967 [2024-07-15 12:58:54.832270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.967 [2024-07-15 12:58:54.832303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.967 [2024-07-15 12:58:54.832319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.967 [2024-07-15 12:58:54.846481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.967 [2024-07-15 12:58:54.846513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.967 [2024-07-15 12:58:54.846528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:02.967 [2024-07-15 12:58:54.866350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.967 [2024-07-15 12:58:54.866382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.967 [2024-07-15 12:58:54.866397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.967 [2024-07-15 12:58:54.885618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.967 [2024-07-15 12:58:54.885650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.967 [2024-07-15 12:58:54.885665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.967 [2024-07-15 12:58:54.901156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:02.967 [2024-07-15 12:58:54.901186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.967 [2024-07-15 12:58:54.901201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.227 [2024-07-15 12:58:54.921462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:03.227 [2024-07-15 12:58:54.921495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.227 [2024-07-15 12:58:54.921510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.227 [2024-07-15 12:58:54.937595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:03.227 [2024-07-15 12:58:54.937626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.227 [2024-07-15 12:58:54.937641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.227 [2024-07-15 12:58:54.954049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:03.227 [2024-07-15 12:58:54.954081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.227 [2024-07-15 12:58:54.954095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.227 [2024-07-15 12:58:54.974219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:03.227 [2024-07-15 12:58:54.974251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.227 [2024-07-15 12:58:54.974272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.227 [2024-07-15 12:58:54.995637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:03.227 [2024-07-15 12:58:54.995669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.227 [2024-07-15 
12:58:54.995684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.227 [2024-07-15 12:58:55.011120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:03.227 [2024-07-15 12:58:55.011152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.227 [2024-07-15 12:58:55.011168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.227 [2024-07-15 12:58:55.025630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:03.227 [2024-07-15 12:58:55.025662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.227 [2024-07-15 12:58:55.025676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.227 [2024-07-15 12:58:55.040192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:03.227 [2024-07-15 12:58:55.040225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.227 [2024-07-15 12:58:55.040239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.227 [2024-07-15 12:58:55.056401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:03.227 [2024-07-15 12:58:55.056441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2066 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.227 [2024-07-15 12:58:55.056456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.227 [2024-07-15 12:58:55.076482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:03.227 [2024-07-15 12:58:55.076514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.227 [2024-07-15 12:58:55.076529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.227 [2024-07-15 12:58:55.092997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:03.227 [2024-07-15 12:58:55.093030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.227 [2024-07-15 12:58:55.093055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.227 [2024-07-15 12:58:55.112102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:03.227 [2024-07-15 12:58:55.112136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.227 [2024-07-15 12:58:55.112150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.227 [2024-07-15 12:58:55.130466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:03.227 [2024-07-15 12:58:55.130500] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.227 [2024-07-15 12:58:55.130515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.227 [2024-07-15 12:58:55.145249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:03.227 [2024-07-15 12:58:55.145289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.227 [2024-07-15 12:58:55.145304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.227 [2024-07-15 12:58:55.159886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:03.227 [2024-07-15 12:58:55.159919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.227 [2024-07-15 12:58:55.159935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.485 [2024-07-15 12:58:55.176524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:03.485 [2024-07-15 12:58:55.176558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.485 [2024-07-15 12:58:55.176574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.485 [2024-07-15 12:58:55.196883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24e5580) 00:29:03.485 [2024-07-15 12:58:55.196917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.485 [2024-07-15 12:58:55.196933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.485 [2024-07-15 12:58:55.210591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:03.485 [2024-07-15 12:58:55.210623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.485 [2024-07-15 12:58:55.210639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.485 [2024-07-15 12:58:55.231161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:03.485 [2024-07-15 12:58:55.231194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.485 [2024-07-15 12:58:55.231209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.485 [2024-07-15 12:58:55.248480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e5580) 00:29:03.485 [2024-07-15 12:58:55.248512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.486 [2024-07-15 12:58:55.248528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.486 00:29:03.486 Latency(us) 00:29:03.486 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.486 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:03.486 nvme0n1 : 2.01 14362.94 56.11 0.00 0.00 8896.70 5183.30 28955.00 00:29:03.486 =================================================================================================================== 00:29:03.486 Total : 14362.94 56.11 0.00 0.00 8896.70 5183.30 28955.00 00:29:03.486 0 00:29:03.486 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:03.486 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:03.486 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:03.486 | .driver_specific 00:29:03.486 | .nvme_error 00:29:03.486 | .status_code 00:29:03.486 | .command_transient_transport_error' 00:29:03.486 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:03.745 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 112 > 0 )) 00:29:03.745 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4102965 00:29:03.745 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 4102965 ']' 00:29:03.745 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 4102965 00:29:03.745 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:03.745 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:03.745 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4102965 00:29:03.745 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:03.745 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:03.745 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4102965' 00:29:03.745 killing process with pid 4102965 00:29:03.745 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 4102965 00:29:03.745 Received shutdown signal, test time was about 2.000000 seconds 00:29:03.745 00:29:03.745 Latency(us) 00:29:03.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.745 =================================================================================================================== 00:29:03.745 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:03.745 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 4102965 00:29:04.004 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:29:04.004 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:04.004 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:04.004 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:04.004 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:04.004 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4104013 00:29:04.004 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4104013 /var/tmp/bperf.sock 00:29:04.004 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:04.004 12:58:55 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 4104013 ']' 00:29:04.004 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:04.004 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:04.004 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:04.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:04.004 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:04.004 12:58:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:04.004 [2024-07-15 12:58:55.888240] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:29:04.004 [2024-07-15 12:58:55.888361] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4104013 ] 00:29:04.004 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:04.004 Zero copy mechanism will not be used. 
00:29:04.263 EAL: No free 2048 kB hugepages reported on node 1
00:29:04.263 [2024-07-15 12:58:56.004017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:04.263 [2024-07-15 12:58:56.102552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:05.198 12:58:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:05.198 12:58:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:05.198 12:58:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:05.198 12:58:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:05.765 12:58:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:05.765 12:58:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:05.765 12:58:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:05.765 12:58:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:05.765 12:58:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:05.765 12:58:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:06.334 nvme0n1
00:29:06.334 12:58:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o
crc32c -t corrupt -i 32
00:29:06.334 12:58:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:06.334 12:58:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:06.334 12:58:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:06.334 12:58:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:06.334 12:58:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:06.334 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:06.334 Zero copy mechanism will not be used.
00:29:06.334 Running I/O for 2 seconds...
00:29:06.334 [2024-07-15 12:58:58.205613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490)
00:29:06.334 [2024-07-15 12:58:58.205662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:06.334 [2024-07-15 12:58:58.205681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:06.334 [2024-07-15 12:58:58.213898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490)
00:29:06.334 [2024-07-15 12:58:58.213936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:06.334 [2024-07-15 12:58:58.213952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:06.334 [2024-07-15 12:58:58.222670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest
error on tqpair=(0xc97490) 00:29:06.334 [2024-07-15 12:58:58.222706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.334 [2024-07-15 12:58:58.222722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.334 [2024-07-15 12:58:58.231377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.334 [2024-07-15 12:58:58.231412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.334 [2024-07-15 12:58:58.231428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.334 [2024-07-15 12:58:58.239732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.334 [2024-07-15 12:58:58.239767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.334 [2024-07-15 12:58:58.239782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.334 [2024-07-15 12:58:58.248395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.334 [2024-07-15 12:58:58.248429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.334 [2024-07-15 12:58:58.248444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.334 [2024-07-15 12:58:58.256826] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.334 [2024-07-15 12:58:58.256859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.334 [2024-07-15 12:58:58.256874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.334 [2024-07-15 12:58:58.265003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.334 [2024-07-15 12:58:58.265036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.334 [2024-07-15 12:58:58.265050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.334 [2024-07-15 12:58:58.273221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.334 [2024-07-15 12:58:58.273263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.334 [2024-07-15 12:58:58.273285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.594 [2024-07-15 12:58:58.282055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.594 [2024-07-15 12:58:58.282090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.594 [2024-07-15 12:58:58.282105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:29:06.594 [2024-07-15 12:58:58.291108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.594 [2024-07-15 12:58:58.291142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.594 [2024-07-15 12:58:58.291158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.594 [2024-07-15 12:58:58.300247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.594 [2024-07-15 12:58:58.300301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.594 [2024-07-15 12:58:58.300316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.594 [2024-07-15 12:58:58.308982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.594 [2024-07-15 12:58:58.309016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.594 [2024-07-15 12:58:58.309032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.317725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.317758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.317773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.326195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.326230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.326245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.334721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.334755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.334770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.343163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.343195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.343210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.351379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.351411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 
12:58:58.351426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.359339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.359372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.359387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.367265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.367298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.367313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.375505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.375537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.375552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.384282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.384325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.384342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.393054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.393088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.393104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.401478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.401510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.401526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.410311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.410345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.410361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.419389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.419422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.419443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.427628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.427662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.427677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.435732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.435765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.435780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.444059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.444092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.444107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.452219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 
00:29:06.595 [2024-07-15 12:58:58.452252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.452277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.460631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.460664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.460678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.468875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.468908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.468923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.477223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.477265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.477280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.485842] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.485875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.485890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.494271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.494310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.494326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.502934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.502965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.502981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.511283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.511315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.511331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.519451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.519484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.519499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.595 [2024-07-15 12:58:58.527542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.595 [2024-07-15 12:58:58.527576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.595 [2024-07-15 12:58:58.527592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.855 [2024-07-15 12:58:58.535489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.855 [2024-07-15 12:58:58.535521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.855 [2024-07-15 12:58:58.535536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.855 [2024-07-15 12:58:58.543570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.855 [2024-07-15 12:58:58.543602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.855 [2024-07-15 12:58:58.543618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.855 [2024-07-15 12:58:58.552293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.855 [2024-07-15 12:58:58.552324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.855 [2024-07-15 12:58:58.552339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.855 [2024-07-15 12:58:58.560741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.855 [2024-07-15 12:58:58.560773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.855 [2024-07-15 12:58:58.560788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.855 [2024-07-15 12:58:58.569164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.855 [2024-07-15 12:58:58.569196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.855 [2024-07-15 12:58:58.569211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.855 [2024-07-15 12:58:58.577707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.855 [2024-07-15 12:58:58.577738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.855 [2024-07-15 12:58:58.577754] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.855 [2024-07-15 12:58:58.585745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.855 [2024-07-15 12:58:58.585777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.855 [2024-07-15 12:58:58.585792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.855 [2024-07-15 12:58:58.594376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.855 [2024-07-15 12:58:58.594409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.855 [2024-07-15 12:58:58.594424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.855 [2024-07-15 12:58:58.602773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.855 [2024-07-15 12:58:58.602805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.855 [2024-07-15 12:58:58.602819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.855 [2024-07-15 12:58:58.611038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.855 [2024-07-15 12:58:58.611070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:06.855 [2024-07-15 12:58:58.611085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.855 [2024-07-15 12:58:58.619377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.855 [2024-07-15 12:58:58.619409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.855 [2024-07-15 12:58:58.619423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.855 [2024-07-15 12:58:58.627932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.855 [2024-07-15 12:58:58.627964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.855 [2024-07-15 12:58:58.627979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.855 [2024-07-15 12:58:58.636458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.855 [2024-07-15 12:58:58.636492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.855 [2024-07-15 12:58:58.636513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.855 [2024-07-15 12:58:58.644821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.855 [2024-07-15 12:58:58.644852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.855 [2024-07-15 12:58:58.644867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.855 [2024-07-15 12:58:58.653379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.855 [2024-07-15 12:58:58.653410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.856 [2024-07-15 12:58:58.653424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.856 [2024-07-15 12:58:58.661798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.856 [2024-07-15 12:58:58.661830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.856 [2024-07-15 12:58:58.661845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.856 [2024-07-15 12:58:58.670205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.856 [2024-07-15 12:58:58.670237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.856 [2024-07-15 12:58:58.670252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.856 [2024-07-15 12:58:58.678790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:06.856 [2024-07-15 12:58:58.678823] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:06.856 [2024-07-15 12:58:58.678839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:06.856 [2024-07-15 12:58:58.686957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490)
00:29:06.856 [2024-07-15 12:58:58.686989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:06.856 [2024-07-15 12:58:58.687004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-message cycle repeats from 12:58:58.695421 through 12:58:59.372490: a data digest error on tqpair=(0xc97490) from nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done, the affected READ command (qid:1, varying cid 0-15, varying lba, len:32), and its completion with TRANSIENT TRANSPORT ERROR (00/22); elapsed timestamps advance 00:29:06.856 -> 00:29:07.150 -> 00:29:07.410 -> 00:29:07.671 ...]
00:29:07.671 [2024-07-15 12:58:59.381175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490)
00:29:07.671 [2024-07-15 12:58:59.381208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:07.671 [2024-07-15 12:58:59.381223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.671 [2024-07-15 12:58:59.389753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.671 [2024-07-15 12:58:59.389787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.671 [2024-07-15 12:58:59.389803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.671 [2024-07-15 12:58:59.399436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.671 [2024-07-15 12:58:59.399471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.671 [2024-07-15 12:58:59.399488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.671 [2024-07-15 12:58:59.408195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.671 [2024-07-15 12:58:59.408227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.671 [2024-07-15 12:58:59.408242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.671 [2024-07-15 12:58:59.416490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.671 [2024-07-15 12:58:59.416524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.416539] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.425568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.425603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.425619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.434358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.434393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.434409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.443056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.443090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.443105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.451604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.451637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:07.672 [2024-07-15 12:58:59.451653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.460210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.460245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.460269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.468445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.468480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.468495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.476705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.476739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.476754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.484985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.485018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.485033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.493000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.493036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.493058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.501235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.501281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.501297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.509229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.509271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.509287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.517302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.517335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.517349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.525368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.525401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.525416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.533779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.533812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.533827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.542365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.542399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.542414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.550931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 
00:29:07.672 [2024-07-15 12:58:59.550964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.550979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.559062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.559095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.559110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.567664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.567703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.567718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.576472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.576506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.576521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.585020] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.585054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.585069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.593631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.593666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.593681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.602052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.602086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.602101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.672 [2024-07-15 12:58:59.610378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.672 [2024-07-15 12:58:59.610412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.672 [2024-07-15 12:58:59.610428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.618986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.619020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.619035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.627464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.627497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.627512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.635944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.635978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.635992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.644082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.644115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.644130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.652839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.652873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.652889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.661532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.661566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.661581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.670270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.670304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.670321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.678864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.678898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.678914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.687283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.687316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.687332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.695640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.695674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.695690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.704154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.704187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.704202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.712362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.712396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:07.933 [2024-07-15 12:58:59.712416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.720428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.720461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.720476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.728447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.728481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.728496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.736856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.736889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.736903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.745427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.745462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.745477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.753928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.753961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.753976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.762066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.762099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.762114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.770748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.770781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.770796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.779126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.779160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.779175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.787639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.787672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.787688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.796322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.796356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.796371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.804793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.804829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.804843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.812990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 
00:29:07.933 [2024-07-15 12:58:59.813024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.813039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.821473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.821506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.821521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.829824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.829858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.933 [2024-07-15 12:58:59.829873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.933 [2024-07-15 12:58:59.838170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.933 [2024-07-15 12:58:59.838204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.934 [2024-07-15 12:58:59.838220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.934 [2024-07-15 12:58:59.846741] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.934 [2024-07-15 12:58:59.846775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.934 [2024-07-15 12:58:59.846790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.934 [2024-07-15 12:58:59.855426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.934 [2024-07-15 12:58:59.855462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.934 [2024-07-15 12:58:59.855483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.934 [2024-07-15 12:58:59.863954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:07.934 [2024-07-15 12:58:59.863988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.934 [2024-07-15 12:58:59.864003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.194 [2024-07-15 12:58:59.872648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.194 [2024-07-15 12:58:59.872683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.194 [2024-07-15 12:58:59.872699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:08.194 [2024-07-15 12:58:59.881063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.194 [2024-07-15 12:58:59.881098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.194 [2024-07-15 12:58:59.881112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.194 [2024-07-15 12:58:59.889385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.195 [2024-07-15 12:58:59.889420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.195 [2024-07-15 12:58:59.889436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.195 [2024-07-15 12:58:59.897847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.195 [2024-07-15 12:58:59.897881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.195 [2024-07-15 12:58:59.897896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.195 [2024-07-15 12:58:59.905916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.195 [2024-07-15 12:58:59.905951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.195 [2024-07-15 12:58:59.905967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.195 [2024-07-15 12:58:59.914892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.195 [2024-07-15 12:58:59.914926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.195 [2024-07-15 12:58:59.914941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.195 [2024-07-15 12:58:59.922973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.195 [2024-07-15 12:58:59.923007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.195 [2024-07-15 12:58:59.923022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.195 [2024-07-15 12:58:59.930969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.195 [2024-07-15 12:58:59.931008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.195 [2024-07-15 12:58:59.931023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.195 [2024-07-15 12:58:59.939243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.195 [2024-07-15 12:58:59.939288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.195 [2024-07-15 12:58:59.939303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.195 [2024-07-15 12:58:59.947713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.195 [2024-07-15 12:58:59.947746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.195 [2024-07-15 12:58:59.947762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.195 [2024-07-15 12:58:59.956058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.195 [2024-07-15 12:58:59.956094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.195 [2024-07-15 12:58:59.956109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.195 [2024-07-15 12:58:59.964822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.195 [2024-07-15 12:58:59.964856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.195 [2024-07-15 12:58:59.964872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.195 [2024-07-15 12:58:59.972967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.195 [2024-07-15 12:58:59.973001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:08.195 [2024-07-15 12:58:59.973016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.195 [2024-07-15 12:58:59.980970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.195 [2024-07-15 12:58:59.981003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.195 [2024-07-15 12:58:59.981020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.195 [2024-07-15 12:58:59.989172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.195 [2024-07-15 12:58:59.989205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.195 [2024-07-15 12:58:59.989220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.195 [2024-07-15 12:58:59.997241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.195 [2024-07-15 12:58:59.997285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.195 [2024-07-15 12:58:59.997300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.195 [2024-07-15 12:59:00.005734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.195 [2024-07-15 12:59:00.005768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.195 [2024-07-15 12:59:00.005784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.195 [2024-07-15 12:59:00.013963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.195 [2024-07-15 12:59:00.013996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.195 [2024-07-15 12:59:00.014011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.195 [2024-07-15 12:59:00.022147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.195 [2024-07-15 12:59:00.022181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.195 [2024-07-15 12:59:00.022195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.195 [2024-07-15 12:59:00.030227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.195 [2024-07-15 12:59:00.030269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.195 [2024-07-15 12:59:00.030284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.195 [2024-07-15 12:59:00.038333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.195 [2024-07-15 12:59:00.038367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.195 [2024-07-15 12:59:00.038383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.195 [2024-07-15 12:59:00.046404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.195 [2024-07-15 12:59:00.046437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.196 [2024-07-15 12:59:00.046453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.196 [2024-07-15 12:59:00.054448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.196 [2024-07-15 12:59:00.054481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.196 [2024-07-15 12:59:00.054497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.196 [2024-07-15 12:59:00.062674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.196 [2024-07-15 12:59:00.062707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.196 [2024-07-15 12:59:00.062722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.196 [2024-07-15 12:59:00.071277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 
00:29:08.196 [2024-07-15 12:59:00.071310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.196 [2024-07-15 12:59:00.071330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.196 [2024-07-15 12:59:00.079922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.196 [2024-07-15 12:59:00.079954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.196 [2024-07-15 12:59:00.079969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.196 [2024-07-15 12:59:00.087998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.196 [2024-07-15 12:59:00.088030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.196 [2024-07-15 12:59:00.088045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.196 [2024-07-15 12:59:00.096430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.196 [2024-07-15 12:59:00.096462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.196 [2024-07-15 12:59:00.096476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.196 [2024-07-15 12:59:00.104681] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.196 [2024-07-15 12:59:00.104713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.196 [2024-07-15 12:59:00.104728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.196 [2024-07-15 12:59:00.112937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.196 [2024-07-15 12:59:00.112970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.196 [2024-07-15 12:59:00.112985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.196 [2024-07-15 12:59:00.121379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.196 [2024-07-15 12:59:00.121412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.196 [2024-07-15 12:59:00.121426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.196 [2024-07-15 12:59:00.129523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.196 [2024-07-15 12:59:00.129555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.196 [2024-07-15 12:59:00.129570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:29:08.455 [2024-07-15 12:59:00.138001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.455 [2024-07-15 12:59:00.138034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.455 [2024-07-15 12:59:00.138049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.455 [2024-07-15 12:59:00.146337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.455 [2024-07-15 12:59:00.146376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.455 [2024-07-15 12:59:00.146391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.455 [2024-07-15 12:59:00.154795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.455 [2024-07-15 12:59:00.154827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.455 [2024-07-15 12:59:00.154841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.455 [2024-07-15 12:59:00.163435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.455 [2024-07-15 12:59:00.163468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.456 [2024-07-15 12:59:00.163483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.456 [2024-07-15 12:59:00.172042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.456 [2024-07-15 12:59:00.172074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.456 [2024-07-15 12:59:00.172089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.456 [2024-07-15 12:59:00.180690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.456 [2024-07-15 12:59:00.180725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.456 [2024-07-15 12:59:00.180739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.456 [2024-07-15 12:59:00.189111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.456 [2024-07-15 12:59:00.189146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.456 [2024-07-15 12:59:00.189161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.456 [2024-07-15 12:59:00.197840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc97490) 00:29:08.456 [2024-07-15 12:59:00.197874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.456 [2024-07-15 12:59:00.197890] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.456 00:29:08.456 Latency(us) 00:29:08.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.456 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:08.456 nvme0n1 : 2.00 3586.31 448.29 0.00 0.00 4455.16 1169.22 10962.39 00:29:08.456 =================================================================================================================== 00:29:08.456 Total : 3586.31 448.29 0.00 0.00 4455.16 1169.22 10962.39 00:29:08.456 0 00:29:08.456 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:08.456 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:08.456 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:08.456 | .driver_specific 00:29:08.456 | .nvme_error 00:29:08.456 | .status_code 00:29:08.456 | .command_transient_transport_error' 00:29:08.456 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:08.716 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 231 > 0 )) 00:29:08.716 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4104013 00:29:08.716 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 4104013 ']' 00:29:08.716 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 4104013 00:29:08.716 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:08.716 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:08.716 
12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4104013 00:29:08.716 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:08.716 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:08.716 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4104013' 00:29:08.716 killing process with pid 4104013 00:29:08.716 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 4104013 00:29:08.716 Received shutdown signal, test time was about 2.000000 seconds 00:29:08.716 00:29:08.716 Latency(us) 00:29:08.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.716 =================================================================================================================== 00:29:08.716 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:08.716 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 4104013 00:29:08.975 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:08.975 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:08.975 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:08.975 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:08.975 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:08.975 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4104814 00:29:08.975 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4104814 /var/tmp/bperf.sock 00:29:08.975 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:08.975 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 4104814 ']' 00:29:08.975 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:08.975 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:08.975 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:08.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:08.975 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:08.975 12:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:08.975 [2024-07-15 12:59:00.799000] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:29:08.975 [2024-07-15 12:59:00.799049] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4104814 ] 00:29:08.975 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.975 [2024-07-15 12:59:00.869412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.234 [2024-07-15 12:59:00.973879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.493 12:59:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:09.493 12:59:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:09.493 12:59:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:09.493 12:59:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:09.493 12:59:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:09.493 12:59:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.493 12:59:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:09.493 12:59:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.493 12:59:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:09.493 12:59:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:10.431 nvme0n1 00:29:10.431 12:59:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:10.431 12:59:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.431 12:59:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:10.431 12:59:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.431 12:59:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:10.431 12:59:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:10.431 Running I/O for 2 seconds... 
00:29:10.431 [2024-07-15 12:59:02.271799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190edd58 00:29:10.431 [2024-07-15 12:59:02.273010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.431 [2024-07-15 12:59:02.273055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:10.431 [2024-07-15 12:59:02.284875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fa7d8 00:29:10.431 [2024-07-15 12:59:02.286024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.431 [2024-07-15 12:59:02.286058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:10.431 [2024-07-15 12:59:02.300699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fc128 00:29:10.431 [2024-07-15 12:59:02.302051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.431 [2024-07-15 12:59:02.302081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:10.431 [2024-07-15 12:59:02.315455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190eea00 00:29:10.431 [2024-07-15 12:59:02.316985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.431 [2024-07-15 12:59:02.317015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:10.431 [2024-07-15 12:59:02.328409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e5a90
00:29:10.431 [2024-07-15 12:59:02.330109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.431 [2024-07-15 12:59:02.330139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:10.431 [2024-07-15 12:59:02.341582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190f81e0
00:29:10.431 [2024-07-15 12:59:02.342516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.431 [2024-07-15 12:59:02.342545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:29:10.431 [2024-07-15 12:59:02.356340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190ea248
00:29:10.431 [2024-07-15 12:59:02.357478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.431 [2024-07-15 12:59:02.357508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.431 [2024-07-15 12:59:02.370819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e01f8
00:29:10.691 [2024-07-15 12:59:02.372472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.691 [2024-07-15 12:59:02.372503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.692 [2024-07-15 12:59:02.385404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e99d8
00:29:10.692 [2024-07-15 12:59:02.386686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.692 [2024-07-15 12:59:02.386716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:10.692 [2024-07-15 12:59:02.399826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190eaab8
00:29:10.692 [2024-07-15 12:59:02.401655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.692 [2024-07-15 12:59:02.401683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:29:10.692 [2024-07-15 12:59:02.414395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190ee5c8
00:29:10.692 [2024-07-15 12:59:02.415870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.692 [2024-07-15 12:59:02.415900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:10.692 [2024-07-15 12:59:02.427402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e6738
00:29:10.692 [2024-07-15 12:59:02.428831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.692 [2024-07-15 12:59:02.428861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:29:10.692 [2024-07-15 12:59:02.443233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e12d8
00:29:10.692 [2024-07-15 12:59:02.444903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.692 [2024-07-15 12:59:02.444934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:29:10.692 [2024-07-15 12:59:02.457981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190ed920
00:29:10.692 [2024-07-15 12:59:02.459829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.692 [2024-07-15 12:59:02.459858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:29:10.692 [2024-07-15 12:59:02.470995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190f6890
00:29:10.692 [2024-07-15 12:59:02.472797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.692 [2024-07-15 12:59:02.472826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:29:10.692 [2024-07-15 12:59:02.482523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e23b8
00:29:10.692 [2024-07-15 12:59:02.483552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.692 [2024-07-15 12:59:02.483582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:29:10.692 [2024-07-15 12:59:02.498348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190f6020
00:29:10.692 [2024-07-15 12:59:02.499582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.692 [2024-07-15 12:59:02.499612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:29:10.692 [2024-07-15 12:59:02.514262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190f8a50
00:29:10.692 [2024-07-15 12:59:02.516219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.692 [2024-07-15 12:59:02.516248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:10.692 [2024-07-15 12:59:02.527508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e4578
00:29:10.692 [2024-07-15 12:59:02.528955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.692 [2024-07-15 12:59:02.528985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:29:10.692 [2024-07-15 12:59:02.541222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e4578
00:29:10.692 [2024-07-15 12:59:02.542622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.692 [2024-07-15 12:59:02.542652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:29:10.692 [2024-07-15 12:59:02.555881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190f1430
00:29:10.692 [2024-07-15 12:59:02.557770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.692 [2024-07-15 12:59:02.557806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:29:10.692 [2024-07-15 12:59:02.571904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190f5be8
00:29:10.692 [2024-07-15 12:59:02.574231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.692 [2024-07-15 12:59:02.574265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:29:10.692 [2024-07-15 12:59:02.583433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e99d8
00:29:10.692 [2024-07-15 12:59:02.584973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.692 [2024-07-15 12:59:02.585001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:29:10.692 [2024-07-15 12:59:02.596761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e23b8
00:29:10.692 [2024-07-15 12:59:02.597776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.692 [2024-07-15 12:59:02.597805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:29:10.692 [2024-07-15 12:59:02.610988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190f7538
00:29:10.692 [2024-07-15 12:59:02.612498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.692 [2024-07-15 12:59:02.612528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:10.692 [2024-07-15 12:59:02.627051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190ecc78
00:29:10.692 [2024-07-15 12:59:02.628978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.692 [2024-07-15 12:59:02.629007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:10.951 [2024-07-15 12:59:02.640370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190f7538
00:29:10.951 [2024-07-15 12:59:02.641706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.951 [2024-07-15 12:59:02.641736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:29:10.951 [2024-07-15 12:59:02.654625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190ee190
00:29:10.951 [2024-07-15 12:59:02.656494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.951 [2024-07-15 12:59:02.656524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:10.951 [2024-07-15 12:59:02.670686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e9e10
00:29:10.951 [2024-07-15 12:59:02.672960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.951 [2024-07-15 12:59:02.672989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:10.951 [2024-07-15 12:59:02.683983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190ee190
00:29:10.951 [2024-07-15 12:59:02.685716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.951 [2024-07-15 12:59:02.685746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:10.951 [2024-07-15 12:59:02.696743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190df550
00:29:10.951 [2024-07-15 12:59:02.698624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.951 [2024-07-15 12:59:02.698654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:10.951 [2024-07-15 12:59:02.709970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190ff3c8
00:29:10.951 [2024-07-15 12:59:02.711091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.951 [2024-07-15 12:59:02.711121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:10.951 [2024-07-15 12:59:02.725989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190eaab8
00:29:10.951 [2024-07-15 12:59:02.727863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.951 [2024-07-15 12:59:02.727893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:29:10.951 [2024-07-15 12:59:02.737619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e6300
00:29:10.951 [2024-07-15 12:59:02.738615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.951 [2024-07-15 12:59:02.738646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:29:10.951 [2024-07-15 12:59:02.755165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e0ea0
00:29:10.951 [2024-07-15 12:59:02.757200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.951 [2024-07-15 12:59:02.757230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:29:10.951 [2024-07-15 12:59:02.766703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e6300
00:29:10.951 [2024-07-15 12:59:02.767920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.951 [2024-07-15 12:59:02.767949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:29:10.951 [2024-07-15 12:59:02.782695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190dece0
00:29:10.952 [2024-07-15 12:59:02.784042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.952 [2024-07-15 12:59:02.784073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:29:10.952 [2024-07-15 12:59:02.797415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190f9f68
00:29:10.952 [2024-07-15 12:59:02.798966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.952 [2024-07-15 12:59:02.798995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:29:10.952 [2024-07-15 12:59:02.810355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190eff18
00:29:10.952 [2024-07-15 12:59:02.812089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.952 [2024-07-15 12:59:02.812119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:10.952 [2024-07-15 12:59:02.823512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e5658
00:29:10.952 [2024-07-15 12:59:02.824475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.952 [2024-07-15 12:59:02.824505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:10.952 [2024-07-15 12:59:02.838222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190ec408
00:29:10.952 [2024-07-15 12:59:02.839382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.952 [2024-07-15 12:59:02.839411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:29:10.952 [2024-07-15 12:59:02.852633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e9168
00:29:10.952 [2024-07-15 12:59:02.854349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.952 [2024-07-15 12:59:02.854379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:29:10.952 [2024-07-15 12:59:02.868337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190feb58
00:29:10.952 [2024-07-15 12:59:02.870202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.952 [2024-07-15 12:59:02.870231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:29:10.952 [2024-07-15 12:59:02.879795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190f20d8
00:29:10.952 [2024-07-15 12:59:02.880861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.952 [2024-07-15 12:59:02.880889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:29:11.212 [2024-07-15 12:59:02.895287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fb8b8
00:29:11.212 [2024-07-15 12:59:02.896974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.212 [2024-07-15 12:59:02.897004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:11.212 [2024-07-15 12:59:02.908719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e38d0
00:29:11.212 [2024-07-15 12:59:02.909955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.212 [2024-07-15 12:59:02.909985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:29:11.212 [2024-07-15 12:59:02.926189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190f8a50
00:29:11.212 [2024-07-15 12:59:02.928400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.212 [2024-07-15 12:59:02.928434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:11.212 [2024-07-15 12:59:02.937671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190ff3c8
00:29:11.212 [2024-07-15 12:59:02.939077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.212 [2024-07-15 12:59:02.939105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:29:11.212 [2024-07-15 12:59:02.955158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190edd58
00:29:11.212 [2024-07-15 12:59:02.957534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.212 [2024-07-15 12:59:02.957563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:11.212 [2024-07-15 12:59:02.966607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fe720
00:29:11.212 [2024-07-15 12:59:02.968180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.212 [2024-07-15 12:59:02.968209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:29:11.212 [2024-07-15 12:59:02.980707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e73e0
00:29:11.212 [2024-07-15 12:59:02.982453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.212 [2024-07-15 12:59:02.982484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:29:11.212 [2024-07-15 12:59:02.997853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fe2e8
00:29:11.212 [2024-07-15 12:59:03.000177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.212 [2024-07-15 12:59:03.000206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:29:11.212 [2024-07-15 12:59:03.009337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190efae0
00:29:11.212 [2024-07-15 12:59:03.010867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.212 [2024-07-15 12:59:03.010896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:11.212 [2024-07-15 12:59:03.023426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e88f8
00:29:11.212 [2024-07-15 12:59:03.025108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.212 [2024-07-15 12:59:03.025137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:11.212 [2024-07-15 12:59:03.038281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190ef270
00:29:11.212 [2024-07-15 12:59:03.039974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.212 [2024-07-15 12:59:03.040003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:11.212 [2024-07-15 12:59:03.051502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190f7100
00:29:11.212 [2024-07-15 12:59:03.052623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.212 [2024-07-15 12:59:03.052652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:29:11.212 [2024-07-15 12:59:03.064968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190f31b8
00:29:11.212 [2024-07-15 12:59:03.066267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.212 [2024-07-15 12:59:03.066296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:29:11.212 [2024-07-15 12:59:03.080476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190f7100
00:29:11.212 [2024-07-15 12:59:03.082393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.212 [2024-07-15 12:59:03.082423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:29:11.212 [2024-07-15 12:59:03.093902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190dece0
00:29:11.212 [2024-07-15 12:59:03.095377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.212 [2024-07-15 12:59:03.095405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:29:11.212 [2024-07-15 12:59:03.107099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e9168
00:29:11.212 [2024-07-15 12:59:03.107987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.212 [2024-07-15 12:59:03.108016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:29:11.212 [2024-07-15 12:59:03.120548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190f6890
00:29:11.212 [2024-07-15 12:59:03.121611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.212 [2024-07-15 12:59:03.121640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:29:11.212 [2024-07-15 12:59:03.136036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e3d08
00:29:11.212 [2024-07-15 12:59:03.137721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.212 [2024-07-15 12:59:03.137750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:11.471 [2024-07-15 12:59:03.151688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190f4298
00:29:11.472 [2024-07-15 12:59:03.153495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.472 [2024-07-15 12:59:03.153523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:29:11.472 [2024-07-15 12:59:03.163132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190de8a8
00:29:11.472 [2024-07-15 12:59:03.164147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.472 [2024-07-15 12:59:03.164175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:29:11.472 [2024-07-15 12:59:03.180612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e99d8
00:29:11.472 [2024-07-15 12:59:03.182587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.472 [2024-07-15 12:59:03.182615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:29:11.472 [2024-07-15 12:59:03.192055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190de8a8
00:29:11.472 [2024-07-15 12:59:03.193238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.472 [2024-07-15 12:59:03.193272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:29:11.472 [2024-07-15 12:59:03.207536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e38d0
00:29:11.472 [2024-07-15 12:59:03.209352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.472 [2024-07-15 12:59:03.209382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:11.472 [2024-07-15 12:59:03.222004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e7c50
00:29:11.472 [2024-07-15 12:59:03.223360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.472 [2024-07-15 12:59:03.223388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:29:11.472 [2024-07-15 12:59:03.236461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190f0788
00:29:11.472 [2024-07-15 12:59:03.238445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.472 [2024-07-15 12:59:03.238474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:29:11.472 [2024-07-15 12:59:03.250931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e88f8
00:29:11.472 [2024-07-15 12:59:03.252470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.472 [2024-07-15 12:59:03.252498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:29:11.472 [2024-07-15 12:59:03.264877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e88f8
00:29:11.472 [2024-07-15 12:59:03.266404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.472 [2024-07-15 12:59:03.266433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:29:11.472 [2024-07-15 12:59:03.278755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e88f8
00:29:11.472 [2024-07-15 12:59:03.280287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.472 [2024-07-15 12:59:03.280316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:29:11.472 [2024-07-15 12:59:03.292650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e88f8
00:29:11.472 [2024-07-15 12:59:03.294180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.472 [2024-07-15 12:59:03.294213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:29:11.472 [2024-07-15 12:59:03.306521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e88f8
00:29:11.472 [2024-07-15 12:59:03.308052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.472 [2024-07-15 12:59:03.308082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:29:11.472 [2024-07-15 12:59:03.319471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e4de8
00:29:11.472 [2024-07-15 12:59:03.320986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.472 [2024-07-15 12:59:03.321015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:29:11.472 [2024-07-15 12:59:03.332678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190eee38
00:29:11.472 [2024-07-15 12:59:03.333617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.472 [2024-07-15 12:59:03.333646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:11.472 [2024-07-15 12:59:03.346419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190eee38
00:29:11.472 [2024-07-15 12:59:03.347355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.472 [2024-07-15 12:59:03.347384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:11.472 [2024-07-15 12:59:03.360301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190eee38
00:29:11.472 [2024-07-15 12:59:03.361228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.472 [2024-07-15 12:59:03.361262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:11.472 [2024-07-15 12:59:03.374172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190eee38
00:29:11.472 [2024-07-15 12:59:03.375101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.472 [2024-07-15 12:59:03.375130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:11.472 [2024-07-15 12:59:03.388045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190eee38
00:29:11.472 [2024-07-15 12:59:03.388978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.472 [2024-07-15 12:59:03.389007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:11.472 [2024-07-15 12:59:03.401955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190eee38
00:29:11.472 [2024-07-15 12:59:03.402897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.472 [2024-07-15 12:59:03.402926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:11.731 [2024-07-15 12:59:03.414880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190f7100
00:29:11.731 [2024-07-15 12:59:03.415808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.731 [2024-07-15 12:59:03.415836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:29:11.731 [2024-07-15 12:59:03.430410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.731 [2024-07-15 12:59:03.431902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.731 [2024-07-15 12:59:03.431931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:29:11.731 [2024-07-15 12:59:03.445974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190eaef0
00:29:11.731 [2024-07-15 12:59:03.447636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.731 [2024-07-15 12:59:03.447664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:29:11.731 [2024-07-15 12:59:03.459170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.731 [2024-07-15 12:59:03.460245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.731 [2024-07-15 12:59:03.460280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.731 [2024-07-15 12:59:03.472910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.731 [2024-07-15 12:59:03.473997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.731 [2024-07-15 12:59:03.474025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.731 [2024-07-15 12:59:03.486799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.731 [2024-07-15 12:59:03.487884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.731 [2024-07-15 12:59:03.487912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.731 [2024-07-15 12:59:03.500681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.731 [2024-07-15 12:59:03.501758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.731 [2024-07-15 12:59:03.501786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.731 [2024-07-15 12:59:03.514592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.731 [2024-07-15 12:59:03.515672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.731 [2024-07-15 12:59:03.515701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.731 [2024-07-15 12:59:03.528449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.731 [2024-07-15 12:59:03.529524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.731 [2024-07-15 12:59:03.529553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.731 [2024-07-15 12:59:03.542313] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.731 [2024-07-15 12:59:03.543393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.731 [2024-07-15 12:59:03.543422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.731 [2024-07-15 12:59:03.556423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.731 [2024-07-15 12:59:03.557503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.731 [2024-07-15 12:59:03.557533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.731 [2024-07-15 12:59:03.570302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.731 [2024-07-15 12:59:03.571379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.731 [2024-07-15 12:59:03.571409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.731 [2024-07-15 12:59:03.584194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.731 [2024-07-15 12:59:03.585291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.731 [2024-07-15 12:59:03.585321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.731 [2024-07-15 12:59:03.598113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.731 [2024-07-15 12:59:03.599192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.731 [2024-07-15 12:59:03.599220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.731 [2024-07-15 12:59:03.611971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.731 [2024-07-15 12:59:03.613054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.731 [2024-07-15 12:59:03.613084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.731 [2024-07-15 12:59:03.625869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.731 [2024-07-15 12:59:03.626949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.731 [2024-07-15 12:59:03.626977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.731 [2024-07-15 12:59:03.639743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.731 [2024-07-15 12:59:03.640818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.731 [2024-07-15 12:59:03.640848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.731 [2024-07-15 12:59:03.653613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.731 [2024-07-15 12:59:03.654694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.731 [2024-07-15 12:59:03.654728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.731 [2024-07-15 12:59:03.667488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.731 [2024-07-15 12:59:03.668566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.731 [2024-07-15 12:59:03.668595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.991 [2024-07-15 12:59:03.681377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.991 [2024-07-15 12:59:03.682461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.991 [2024-07-15 12:59:03.682489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.991 [2024-07-15 12:59:03.695260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.991 [2024-07-15 12:59:03.696337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.991 [2024-07-15 12:59:03.696366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.991 [2024-07-15 12:59:03.709138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.991 [2024-07-15 12:59:03.710223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.991 [2024-07-15 12:59:03.710252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.991 [2024-07-15 12:59:03.723021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.991 [2024-07-15 12:59:03.724096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.991 [2024-07-15 12:59:03.724125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.991 [2024-07-15 12:59:03.736900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.991 [2024-07-15 12:59:03.737979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.991 [2024-07-15 12:59:03.738007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.991 [2024-07-15 12:59:03.750947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.991 [2024-07-15 12:59:03.752031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.991 [2024-07-15 12:59:03.752061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.991 [2024-07-15 12:59:03.764812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.991 [2024-07-15 12:59:03.765897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.991 [2024-07-15 12:59:03.765926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.991 [2024-07-15 12:59:03.778688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.991 [2024-07-15 12:59:03.779776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.991 [2024-07-15 12:59:03.779807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.991 [2024-07-15 12:59:03.792571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.991 [2024-07-15 12:59:03.793651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.991 [2024-07-15 12:59:03.793681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.991 [2024-07-15 12:59:03.806444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.991 [2024-07-15 12:59:03.807523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.991 [2024-07-15 12:59:03.807553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.991 [2024-07-15 12:59:03.820329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.991 [2024-07-15 12:59:03.821414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.991 [2024-07-15 12:59:03.821443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.991 [2024-07-15 12:59:03.834195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.991 [2024-07-15 12:59:03.835277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.991 [2024-07-15 12:59:03.835306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.991 [2024-07-15 12:59:03.848097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.991 [2024-07-15 12:59:03.849174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.991 [2024-07-15 12:59:03.849203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.991 [2024-07-15 12:59:03.861993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.991 [2024-07-15 12:59:03.863074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.991 [2024-07-15 12:59:03.863104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.992 [2024-07-15 12:59:03.875868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.992 [2024-07-15 12:59:03.876950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.992 [2024-07-15 12:59:03.876979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.992 [2024-07-15 12:59:03.889780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.992 [2024-07-15 12:59:03.890855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.992 [2024-07-15 12:59:03.890883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.992 [2024-07-15 12:59:03.903666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.992 [2024-07-15 12:59:03.904749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.992 [2024-07-15 12:59:03.904780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:11.992 [2024-07-15 12:59:03.917533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:11.992 [2024-07-15 12:59:03.918609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.992 [2024-07-15 12:59:03.918639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.251 [2024-07-15 12:59:03.931431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.251 [2024-07-15 12:59:03.932512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.251 [2024-07-15 12:59:03.932542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.251 [2024-07-15 12:59:03.945302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.251 [2024-07-15 12:59:03.946388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.251 [2024-07-15 12:59:03.946417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.251 [2024-07-15 12:59:03.959179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.251 [2024-07-15 12:59:03.960259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.251 [2024-07-15 12:59:03.960288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.251 [2024-07-15 12:59:03.973053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.251 [2024-07-15 12:59:03.974131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.251 [2024-07-15 12:59:03.974160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.251 [2024-07-15 12:59:03.986924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.251 [2024-07-15 12:59:03.988004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.251 [2024-07-15 12:59:03.988032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.251 [2024-07-15 12:59:04.000812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.251 [2024-07-15 12:59:04.001892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.251 [2024-07-15 12:59:04.001922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.251 [2024-07-15 12:59:04.014698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.251 [2024-07-15 12:59:04.015775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.251 [2024-07-15 12:59:04.015809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.251 [2024-07-15 12:59:04.028565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.251 [2024-07-15 12:59:04.029654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.251 [2024-07-15 12:59:04.029682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.251 [2024-07-15 12:59:04.042449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.251 [2024-07-15 12:59:04.043529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.251 [2024-07-15 12:59:04.043558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.251 [2024-07-15 12:59:04.056322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.251 [2024-07-15 12:59:04.057409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.251 [2024-07-15 12:59:04.057439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.251 [2024-07-15 12:59:04.070194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.251 [2024-07-15 12:59:04.071282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.251 [2024-07-15 12:59:04.071311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.251 [2024-07-15 12:59:04.084070] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.251 [2024-07-15 12:59:04.085146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.251 [2024-07-15 12:59:04.085175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.251 [2024-07-15 12:59:04.097967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.251 [2024-07-15 12:59:04.099048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.251 [2024-07-15 12:59:04.099076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.251 [2024-07-15 12:59:04.111850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.251 [2024-07-15 12:59:04.112934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.251 [2024-07-15 12:59:04.112964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.251 [2024-07-15 12:59:04.125731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.251 [2024-07-15 12:59:04.126808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.251 [2024-07-15 12:59:04.126838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.251 [2024-07-15 12:59:04.139605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.251 [2024-07-15 12:59:04.140691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.251 [2024-07-15 12:59:04.140720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.251 [2024-07-15 12:59:04.153496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.251 [2024-07-15 12:59:04.154571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.252 [2024-07-15 12:59:04.154600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.252 [2024-07-15 12:59:04.167375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.252 [2024-07-15 12:59:04.168461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.252 [2024-07-15 12:59:04.168489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.252 [2024-07-15 12:59:04.181279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.252 [2024-07-15 12:59:04.182362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.252 [2024-07-15 12:59:04.182390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.511 [2024-07-15 12:59:04.195177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.511 [2024-07-15 12:59:04.196261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.511 [2024-07-15 12:59:04.196291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.511 [2024-07-15 12:59:04.209062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.511 [2024-07-15 12:59:04.210148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.511 [2024-07-15 12:59:04.210178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.511 [2024-07-15 12:59:04.222983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.511 [2024-07-15 12:59:04.224069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.511 [2024-07-15 12:59:04.224098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:12.511 [2024-07-15 12:59:04.236992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0
00:29:12.511 [2024-07-15 12:59:04.238079] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.511 [2024-07-15 12:59:04.238107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:12.511 [2024-07-15 12:59:04.250881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190e49b0 00:29:12.511 [2024-07-15 12:59:04.251961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.511 [2024-07-15 12:59:04.251990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:12.511 00:29:12.511 Latency(us) 00:29:12.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.511 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:12.511 nvme0n1 : 2.01 18147.67 70.89 0.00 0.00 7039.54 3559.80 18469.24 00:29:12.511 =================================================================================================================== 00:29:12.511 Total : 18147.67 70.89 0.00 0.00 7039.54 3559.80 18469.24 00:29:12.511 0 00:29:12.511 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:12.511 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:12.511 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:12.511 | .driver_specific 00:29:12.511 | .nvme_error 00:29:12.511 | .status_code 00:29:12.511 | .command_transient_transport_error' 00:29:12.511 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b 
nvme0n1 00:29:12.770 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 )) 00:29:12.770 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4104814 00:29:12.770 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 4104814 ']' 00:29:12.770 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 4104814 00:29:12.770 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:12.770 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:12.770 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4104814 00:29:12.770 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:12.770 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:12.770 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4104814' 00:29:12.770 killing process with pid 4104814 00:29:12.770 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 4104814 00:29:12.770 Received shutdown signal, test time was about 2.000000 seconds 00:29:12.770 00:29:12.770 Latency(us) 00:29:12.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.770 =================================================================================================================== 00:29:12.770 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:12.770 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 4104814 00:29:13.029 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:13.029 12:59:04 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:13.029 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:13.029 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:13.029 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:13.029 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4105600 00:29:13.029 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4105600 /var/tmp/bperf.sock 00:29:13.029 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:13.029 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 4105600 ']' 00:29:13.029 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:13.029 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:13.029 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:13.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:13.029 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:13.029 12:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:13.029 [2024-07-15 12:59:04.866997] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:29:13.029 [2024-07-15 12:59:04.867060] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4105600 ] 00:29:13.029 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:13.029 Zero copy mechanism will not be used. 00:29:13.029 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.029 [2024-07-15 12:59:04.948828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.288 [2024-07-15 12:59:05.043433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.288 12:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:13.288 12:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:13.288 12:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:13.288 12:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:13.547 12:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:13.547 12:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.547 12:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:13.547 12:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.547 12:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
00:29:13.547 12:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:14.116 nvme0n1 00:29:14.116 12:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:14.116 12:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.116 12:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:14.116 12:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.116 12:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:14.116 12:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:14.116 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:14.116 Zero copy mechanism will not be used. 00:29:14.116 Running I/O for 2 seconds... 
00:29:14.116 [2024-07-15 12:59:06.046646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.116 [2024-07-15 12:59:06.047182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.116 [2024-07-15 12:59:06.047224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.116 [2024-07-15 12:59:06.053042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.116 [2024-07-15 12:59:06.053575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.116 [2024-07-15 12:59:06.053611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.377 [2024-07-15 12:59:06.059321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.377 [2024-07-15 12:59:06.059845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.377 [2024-07-15 12:59:06.059878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.377 [2024-07-15 12:59:06.065469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.377 [2024-07-15 12:59:06.065994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.377 [2024-07-15 12:59:06.066027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.377 [2024-07-15 12:59:06.071711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.377 [2024-07-15 12:59:06.072216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.377 [2024-07-15 12:59:06.072249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.377 [2024-07-15 12:59:06.077773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.377 [2024-07-15 12:59:06.078315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.377 [2024-07-15 12:59:06.078346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.377 [2024-07-15 12:59:06.083862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.377 [2024-07-15 12:59:06.084397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.377 [2024-07-15 12:59:06.084430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.377 [2024-07-15 12:59:06.089944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.377 [2024-07-15 12:59:06.090477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.377 [2024-07-15 12:59:06.090508] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.377 [2024-07-15 12:59:06.096213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.377 [2024-07-15 12:59:06.096742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.377 [2024-07-15 12:59:06.096773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.377 [2024-07-15 12:59:06.102363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.377 [2024-07-15 12:59:06.102881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.377 [2024-07-15 12:59:06.102917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.377 [2024-07-15 12:59:06.108620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.377 [2024-07-15 12:59:06.109127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.377 [2024-07-15 12:59:06.109159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.377 [2024-07-15 12:59:06.114755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.377 [2024-07-15 12:59:06.115295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:14.377 [2024-07-15 12:59:06.115326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.377 [2024-07-15 12:59:06.120832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.121365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.121396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.126930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.127466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.127497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.132992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.133524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.133554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.139034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.139566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.139596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.145097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.145625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.145656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.151110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.151641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.151671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.157205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.157727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.157758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.163252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.163804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.163834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.169440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.169965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.169995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.175485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.176013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.176044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.181561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.182083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.182114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.187639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 
00:29:14.378 [2024-07-15 12:59:06.188156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.188186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.193770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.194295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.194325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.199885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.200420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.200450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.205989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.206521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.206558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.212047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.212576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.212606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.218169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.218701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.218732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.224210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.224742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.224771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.230301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.230833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.230862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 
12:59:06.236334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.236858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.236888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.242425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.242945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.242976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.248587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.249098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.249128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.254673] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.255188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.255218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.260773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.261296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.261326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.266855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.267390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.267421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.272823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.273363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.273393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.278839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.279368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.279399] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.284901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.285441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.285472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.378 [2024-07-15 12:59:06.291111] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.378 [2024-07-15 12:59:06.291635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.378 [2024-07-15 12:59:06.291666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.379 [2024-07-15 12:59:06.297141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.379 [2024-07-15 12:59:06.297655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.379 [2024-07-15 12:59:06.297685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.379 [2024-07-15 12:59:06.303323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.379 [2024-07-15 12:59:06.303825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.379 [2024-07-15 
12:59:06.303855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.379 [2024-07-15 12:59:06.309909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.379 [2024-07-15 12:59:06.310446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.379 [2024-07-15 12:59:06.310477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.639 [2024-07-15 12:59:06.316539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.639 [2024-07-15 12:59:06.317064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.639 [2024-07-15 12:59:06.317095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.639 [2024-07-15 12:59:06.324626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.639 [2024-07-15 12:59:06.325148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.639 [2024-07-15 12:59:06.325177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.639 [2024-07-15 12:59:06.332802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.639 [2024-07-15 12:59:06.333315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.639 [2024-07-15 12:59:06.333344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.639 [2024-07-15 12:59:06.341146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.639 [2024-07-15 12:59:06.341659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.639 [2024-07-15 12:59:06.341690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.639 [2024-07-15 12:59:06.349473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.639 [2024-07-15 12:59:06.349989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.639 [2024-07-15 12:59:06.350020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.639 [2024-07-15 12:59:06.357857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.639 [2024-07-15 12:59:06.358386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.639 [2024-07-15 12:59:06.358417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.639 [2024-07-15 12:59:06.366219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.639 [2024-07-15 12:59:06.366744] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.639 [2024-07-15 12:59:06.366774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.639 [2024-07-15 12:59:06.374249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.639 [2024-07-15 12:59:06.374759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.639 [2024-07-15 12:59:06.374789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.639 [2024-07-15 12:59:06.382171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.639 [2024-07-15 12:59:06.382690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.639 [2024-07-15 12:59:06.382725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.639 [2024-07-15 12:59:06.390160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.639 [2024-07-15 12:59:06.390690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.639 [2024-07-15 12:59:06.390720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.639 [2024-07-15 12:59:06.398370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.639 [2024-07-15 
12:59:06.398887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.639 [2024-07-15 12:59:06.398918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.639 [2024-07-15 12:59:06.406601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.639 [2024-07-15 12:59:06.407134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.639 [2024-07-15 12:59:06.407164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.639 [2024-07-15 12:59:06.415117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.639 [2024-07-15 12:59:06.415645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.639 [2024-07-15 12:59:06.415675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.639 [2024-07-15 12:59:06.423460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.639 [2024-07-15 12:59:06.423977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.639 [2024-07-15 12:59:06.424007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.639 [2024-07-15 12:59:06.430594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.639 [2024-07-15 12:59:06.431112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.639 [2024-07-15 12:59:06.431142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.639 [2024-07-15 12:59:06.438683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.639 [2024-07-15 12:59:06.439203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.640 [2024-07-15 12:59:06.439233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.640 [2024-07-15 12:59:06.446932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.640 [2024-07-15 12:59:06.447462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.640 [2024-07-15 12:59:06.447492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.640 [2024-07-15 12:59:06.453873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.640 [2024-07-15 12:59:06.454409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.640 [2024-07-15 12:59:06.454440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.640 [2024-07-15 12:59:06.460370] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.640 [2024-07-15 12:59:06.460887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.640 [2024-07-15 12:59:06.460916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.640 [2024-07-15 12:59:06.466617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.640 [2024-07-15 12:59:06.467136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.640 [2024-07-15 12:59:06.467167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.640 [2024-07-15 12:59:06.472783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.640 [2024-07-15 12:59:06.473302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.640 [2024-07-15 12:59:06.473333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.640 [2024-07-15 12:59:06.479416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.640 [2024-07-15 12:59:06.479924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.640 [2024-07-15 12:59:06.479954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:29:14.640 [2024-07-15 12:59:06.486503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.640 [2024-07-15 12:59:06.487009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.640 [2024-07-15 12:59:06.487038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.640 [2024-07-15 12:59:06.493863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.640 [2024-07-15 12:59:06.493954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.640 [2024-07-15 12:59:06.493982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.640 [2024-07-15 12:59:06.501051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.640 [2024-07-15 12:59:06.501573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.640 [2024-07-15 12:59:06.501603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.640 [2024-07-15 12:59:06.507528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.640 [2024-07-15 12:59:06.508051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.640 [2024-07-15 12:59:06.508082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.640 [2024-07-15 12:59:06.514056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.640 [2024-07-15 12:59:06.514583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.640 [2024-07-15 12:59:06.514613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.640 [2024-07-15 12:59:06.520623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.640 [2024-07-15 12:59:06.521143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.640 [2024-07-15 12:59:06.521173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.640 [2024-07-15 12:59:06.527226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.640 [2024-07-15 12:59:06.527754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.640 [2024-07-15 12:59:06.527785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.640 [2024-07-15 12:59:06.533701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.640 [2024-07-15 12:59:06.534205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.640 [2024-07-15 12:59:06.534234] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.640 [2024-07-15 12:59:06.540818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.640 [2024-07-15 12:59:06.541347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.640 [2024-07-15 12:59:06.541376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.640 [2024-07-15 12:59:06.548403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.640 [2024-07-15 12:59:06.548927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.640 [2024-07-15 12:59:06.548958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.640 [2024-07-15 12:59:06.555458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.640 [2024-07-15 12:59:06.555964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.640 [2024-07-15 12:59:06.555994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.640 [2024-07-15 12:59:06.562031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.640 [2024-07-15 12:59:06.562564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:14.640 [2024-07-15 12:59:06.562594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.640 [2024-07-15 12:59:06.569085] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.640 [2024-07-15 12:59:06.569611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.640 [2024-07-15 12:59:06.569646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.640 [2024-07-15 12:59:06.575669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.640 [2024-07-15 12:59:06.576175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.640 [2024-07-15 12:59:06.576206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.900 [2024-07-15 12:59:06.582322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.900 [2024-07-15 12:59:06.582838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.900 [2024-07-15 12:59:06.582868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.900 [2024-07-15 12:59:06.590170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.900 [2024-07-15 12:59:06.590706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.900 [2024-07-15 12:59:06.590735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.900 [2024-07-15 12:59:06.597695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.900 [2024-07-15 12:59:06.598213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.900 [2024-07-15 12:59:06.598243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.900 [2024-07-15 12:59:06.604599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.900 [2024-07-15 12:59:06.605116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.900 [2024-07-15 12:59:06.605147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.900 [2024-07-15 12:59:06.610818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.900 [2024-07-15 12:59:06.611337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.900 [2024-07-15 12:59:06.611367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.900 [2024-07-15 12:59:06.617026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.900 [2024-07-15 12:59:06.617542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.900 [2024-07-15 12:59:06.617572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.900 [2024-07-15 12:59:06.623205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.900 [2024-07-15 12:59:06.623726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.900 [2024-07-15 12:59:06.623756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.900 [2024-07-15 12:59:06.629882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.900 [2024-07-15 12:59:06.630411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.900 [2024-07-15 12:59:06.630441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.900 [2024-07-15 12:59:06.636123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.900 [2024-07-15 12:59:06.636644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.900 [2024-07-15 12:59:06.636674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.900 [2024-07-15 12:59:06.642349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 
00:29:14.900 [2024-07-15 12:59:06.642853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.900 [2024-07-15 12:59:06.642883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.900 [2024-07-15 12:59:06.648443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.900 [2024-07-15 12:59:06.648966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.900 [2024-07-15 12:59:06.648995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.900 [2024-07-15 12:59:06.654576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.900 [2024-07-15 12:59:06.655099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.900 [2024-07-15 12:59:06.655129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.900 [2024-07-15 12:59:06.661005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.900 [2024-07-15 12:59:06.661530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.900 [2024-07-15 12:59:06.661561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.900 [2024-07-15 12:59:06.667219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.900 [2024-07-15 12:59:06.667755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.900 [2024-07-15 12:59:06.667785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.900 [2024-07-15 12:59:06.673335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.900 [2024-07-15 12:59:06.673855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.901 [2024-07-15 12:59:06.673885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.901 [2024-07-15 12:59:06.679415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.901 [2024-07-15 12:59:06.679921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.901 [2024-07-15 12:59:06.679952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.901 [2024-07-15 12:59:06.685843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.901 [2024-07-15 12:59:06.686367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.901 [2024-07-15 12:59:06.686397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.901 [2024-07-15 
12:59:06.693282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.901 [2024-07-15 12:59:06.693827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.901 [2024-07-15 12:59:06.693857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.901 [2024-07-15 12:59:06.700595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.901 [2024-07-15 12:59:06.701104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.901 [2024-07-15 12:59:06.701134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.901 [2024-07-15 12:59:06.707668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.901 [2024-07-15 12:59:06.708193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.901 [2024-07-15 12:59:06.708222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.901 [2024-07-15 12:59:06.714196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.901 [2024-07-15 12:59:06.714717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.901 [2024-07-15 12:59:06.714747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.901 [2024-07-15 12:59:06.720513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.901 [2024-07-15 12:59:06.721031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.901 [2024-07-15 12:59:06.721060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.901 [2024-07-15 12:59:06.726819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.901 [2024-07-15 12:59:06.727333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.901 [2024-07-15 12:59:06.727363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.901 [2024-07-15 12:59:06.733022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.901 [2024-07-15 12:59:06.733537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.901 [2024-07-15 12:59:06.733567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.901 [2024-07-15 12:59:06.739160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:14.901 [2024-07-15 12:59:06.739699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.901 [2024-07-15 12:59:06.739734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:14.901 [2024-07-15 12:59:06.745385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:14.901 [2024-07-15 12:59:06.745915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.901 [2024-07-15 12:59:06.745944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:14.901 [2024-07-15 12:59:06.751619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:14.901 [2024-07-15 12:59:06.752143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.901 [2024-07-15 12:59:06.752173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:14.901 [2024-07-15 12:59:06.758101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:14.901 [2024-07-15 12:59:06.758627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.901 [2024-07-15 12:59:06.758657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.901 [2024-07-15 12:59:06.764413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:14.901 [2024-07-15 12:59:06.764918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.901 [2024-07-15 12:59:06.764947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:14.901 [2024-07-15 12:59:06.770474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:14.901 [2024-07-15 12:59:06.770952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.901 [2024-07-15 12:59:06.770982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:14.901 [2024-07-15 12:59:06.776370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:14.901 [2024-07-15 12:59:06.776867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.901 [2024-07-15 12:59:06.776897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:14.901 [2024-07-15 12:59:06.782266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:14.901 [2024-07-15 12:59:06.782769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.901 [2024-07-15 12:59:06.782799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.901 [2024-07-15 12:59:06.788476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:14.901 [2024-07-15 12:59:06.788970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.901 [2024-07-15 12:59:06.789000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:14.901 [2024-07-15 12:59:06.795546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:14.901 [2024-07-15 12:59:06.796041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.901 [2024-07-15 12:59:06.796071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:14.901 [2024-07-15 12:59:06.803608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:14.901 [2024-07-15 12:59:06.804207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.901 [2024-07-15 12:59:06.804236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:14.901 [2024-07-15 12:59:06.811746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:14.901 [2024-07-15 12:59:06.812249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.901 [2024-07-15 12:59:06.812286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.901 [2024-07-15 12:59:06.819639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:14.901 [2024-07-15 12:59:06.820149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.901 [2024-07-15 12:59:06.820178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:14.901 [2024-07-15 12:59:06.827507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:14.901 [2024-07-15 12:59:06.827986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.901 [2024-07-15 12:59:06.828016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:14.901 [2024-07-15 12:59:06.835596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:14.901 [2024-07-15 12:59:06.836140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.901 [2024-07-15 12:59:06.836171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.162 [2024-07-15 12:59:06.844564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.162 [2024-07-15 12:59:06.845131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.162 [2024-07-15 12:59:06.845161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.162 [2024-07-15 12:59:06.853520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.162 [2024-07-15 12:59:06.854035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.162 [2024-07-15 12:59:06.854064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.162 [2024-07-15 12:59:06.862578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.162 [2024-07-15 12:59:06.863166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.162 [2024-07-15 12:59:06.863201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.162 [2024-07-15 12:59:06.872392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.162 [2024-07-15 12:59:06.872955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.162 [2024-07-15 12:59:06.872985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.162 [2024-07-15 12:59:06.881432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.162 [2024-07-15 12:59:06.882047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.162 [2024-07-15 12:59:06.882077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.162 [2024-07-15 12:59:06.890755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.162 [2024-07-15 12:59:06.891379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.162 [2024-07-15 12:59:06.891409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.162 [2024-07-15 12:59:06.898733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.162 [2024-07-15 12:59:06.899225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.162 [2024-07-15 12:59:06.899263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.162 [2024-07-15 12:59:06.906653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.162 [2024-07-15 12:59:06.907149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.162 [2024-07-15 12:59:06.907179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.162 [2024-07-15 12:59:06.914823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.162 [2024-07-15 12:59:06.915315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.162 [2024-07-15 12:59:06.915345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.162 [2024-07-15 12:59:06.922345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.162 [2024-07-15 12:59:06.922836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.162 [2024-07-15 12:59:06.922865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.162 [2024-07-15 12:59:06.929919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.162 [2024-07-15 12:59:06.930399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.162 [2024-07-15 12:59:06.930429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.162 [2024-07-15 12:59:06.937816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:06.938266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:06.938296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:06.945785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:06.946292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:06.946322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:06.954794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:06.955306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:06.955336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:06.962957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:06.963404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:06.963434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:06.970369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:06.970818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:06.970848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:06.977045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:06.977512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:06.977544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:06.983509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:06.983963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:06.983993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:06.989865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:06.990337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:06.990368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:06.996230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:06.996705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:06.996735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:07.002632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:07.003093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:07.003124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:07.008972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:07.009446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:07.009476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:07.015103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:07.015562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:07.015592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:07.022190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:07.022636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:07.022665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:07.030043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:07.030556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:07.030585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:07.037231] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:07.037712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:07.037743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:07.043363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:07.043833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:07.043863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:07.049783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:07.050244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:07.050281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:07.056197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:07.056642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:07.056683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:07.062597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:07.063069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:07.063098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:07.068549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:07.068987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:07.069018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:07.075344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:07.075783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:07.075813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:07.082743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:07.083246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:07.083283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:07.089913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:07.090340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:07.090370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.163 [2024-07-15 12:59:07.095670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.163 [2024-07-15 12:59:07.096110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-07-15 12:59:07.096140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.101822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.102265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.424 [2024-07-15 12:59:07.102295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.107531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.107964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.424 [2024-07-15 12:59:07.107995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.113144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.113559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.424 [2024-07-15 12:59:07.113589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.118362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.118736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.424 [2024-07-15 12:59:07.118766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.124276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.124708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.424 [2024-07-15 12:59:07.124738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.130620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.131033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.424 [2024-07-15 12:59:07.131063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.136978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.137452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.424 [2024-07-15 12:59:07.137482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.143111] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.143483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.424 [2024-07-15 12:59:07.143513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.149570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.149997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.424 [2024-07-15 12:59:07.150027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.156189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.156570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.424 [2024-07-15 12:59:07.156600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.162850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.163325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.424 [2024-07-15 12:59:07.163355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.169295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.169745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.424 [2024-07-15 12:59:07.169776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.175990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.176392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.424 [2024-07-15 12:59:07.176423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.182408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.182855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.424 [2024-07-15 12:59:07.182885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.189234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.189656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.424 [2024-07-15 12:59:07.189686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.195981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.196449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.424 [2024-07-15 12:59:07.196480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.202547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.202966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.424 [2024-07-15 12:59:07.202996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.208831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.209182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.424 [2024-07-15 12:59:07.209212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.214511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.214876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.424 [2024-07-15 12:59:07.214907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.219510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.219807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.424 [2024-07-15 12:59:07.219842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.224577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.224885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.424 [2024-07-15 12:59:07.224917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.424 [2024-07-15 12:59:07.230650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.424 [2024-07-15 12:59:07.230952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.425 [2024-07-15 12:59:07.230983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.425 [2024-07-15 12:59:07.235650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.425 [2024-07-15 12:59:07.235946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.425 [2024-07-15 12:59:07.235977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.425 [2024-07-15 12:59:07.240911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.425 [2024-07-15 12:59:07.241249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.425 [2024-07-15 12:59:07.241289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.425 [2024-07-15 12:59:07.246467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.425 [2024-07-15 12:59:07.246746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.425 [2024-07-15 12:59:07.246777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.425 [2024-07-15 12:59:07.251515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.425 [2024-07-15 12:59:07.251827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.425 [2024-07-15 12:59:07.251857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.425 [2024-07-15 12:59:07.256663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.425 [2024-07-15 12:59:07.256943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.425 [2024-07-15 12:59:07.256973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.425 [2024-07-15 12:59:07.261740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.425 [2024-07-15 12:59:07.262047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.425 [2024-07-15 12:59:07.262077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.425 [2024-07-15 12:59:07.266803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.425 [2024-07-15 12:59:07.267088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:29:15.425 [2024-07-15 12:59:07.267118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.425 [2024-07-15 12:59:07.271822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.425 [2024-07-15 12:59:07.272112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.425 [2024-07-15 12:59:07.272142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.425 [2024-07-15 12:59:07.276923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.425 [2024-07-15 12:59:07.277242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.425 [2024-07-15 12:59:07.277280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.425 [2024-07-15 12:59:07.281994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.425 [2024-07-15 12:59:07.282291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.425 [2024-07-15 12:59:07.282321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.425 [2024-07-15 12:59:07.287192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.425 [2024-07-15 12:59:07.287529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.425 [2024-07-15 12:59:07.287559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.425 [2024-07-15 12:59:07.293145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.425 [2024-07-15 12:59:07.293442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.425 [2024-07-15 12:59:07.293472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.425 [2024-07-15 12:59:07.298839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.425 [2024-07-15 12:59:07.299189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.425 [2024-07-15 12:59:07.299219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.425 [2024-07-15 12:59:07.306176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.425 [2024-07-15 12:59:07.306563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.425 [2024-07-15 12:59:07.306594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.425 [2024-07-15 12:59:07.313371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.425 [2024-07-15 12:59:07.313774] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.425 [2024-07-15 12:59:07.313808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.425 [2024-07-15 12:59:07.321394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.425 [2024-07-15 12:59:07.321772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.425 [2024-07-15 12:59:07.321802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.425 [2024-07-15 12:59:07.328994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.425 [2024-07-15 12:59:07.329346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.425 [2024-07-15 12:59:07.329377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.425 [2024-07-15 12:59:07.337238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.425 [2024-07-15 12:59:07.337617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.425 [2024-07-15 12:59:07.337648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.425 [2024-07-15 12:59:07.345244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 
00:29:15.425 [2024-07-15 12:59:07.345587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.425 [2024-07-15 12:59:07.345617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.425 [2024-07-15 12:59:07.353074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.425 [2024-07-15 12:59:07.353472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.425 [2024-07-15 12:59:07.353502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.425 [2024-07-15 12:59:07.361443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.425 [2024-07-15 12:59:07.361802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.425 [2024-07-15 12:59:07.361832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.369943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.370299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.370330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.377880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.378262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.378293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.385514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.385924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.385954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.393730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.394127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.394157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.401969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.402340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.402370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 
12:59:07.408885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.409200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.409229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.416117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.416499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.416529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.423976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.424372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.424403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.430374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.430674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.430704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.435405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.435711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.435741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.440411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.440736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.440766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.445530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.445841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.445871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.450839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.451160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.451191] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.455925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.456219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.456248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.460993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.461327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.461357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.466120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.466417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.466447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.472231] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.472562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 
12:59:07.472592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.478588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.478910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.478940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.485970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.486371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.486401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.494183] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.494552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.494587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.502584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.502957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.502987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.510924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.511310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.511341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.518835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.519160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.519190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.526877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.527216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.527246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.686 [2024-07-15 12:59:07.535210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.686 [2024-07-15 12:59:07.535588] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.686 [2024-07-15 12:59:07.535618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.687 [2024-07-15 12:59:07.543234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.687 [2024-07-15 12:59:07.543653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.687 [2024-07-15 12:59:07.543683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.687 [2024-07-15 12:59:07.551143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.687 [2024-07-15 12:59:07.551549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.687 [2024-07-15 12:59:07.551579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.687 [2024-07-15 12:59:07.559560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.687 [2024-07-15 12:59:07.559969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.687 [2024-07-15 12:59:07.559998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.687 [2024-07-15 12:59:07.566618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.687 [2024-07-15 
12:59:07.566951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.687 [2024-07-15 12:59:07.566980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.687 [2024-07-15 12:59:07.573206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.687 [2024-07-15 12:59:07.573577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.687 [2024-07-15 12:59:07.573607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.687 [2024-07-15 12:59:07.579767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.687 [2024-07-15 12:59:07.580093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.687 [2024-07-15 12:59:07.580123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.687 [2024-07-15 12:59:07.585794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.687 [2024-07-15 12:59:07.586164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.687 [2024-07-15 12:59:07.586195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.687 [2024-07-15 12:59:07.592699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.687 [2024-07-15 12:59:07.593037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.687 [2024-07-15 12:59:07.593067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.687 [2024-07-15 12:59:07.598722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.687 [2024-07-15 12:59:07.599003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.687 [2024-07-15 12:59:07.599033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.687 [2024-07-15 12:59:07.604784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.687 [2024-07-15 12:59:07.605080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.687 [2024-07-15 12:59:07.605110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.687 [2024-07-15 12:59:07.611655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.687 [2024-07-15 12:59:07.611959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.687 [2024-07-15 12:59:07.611989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.687 [2024-07-15 12:59:07.618375] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.687 [2024-07-15 12:59:07.618755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.687 [2024-07-15 12:59:07.618785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.948 [2024-07-15 12:59:07.626470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.948 [2024-07-15 12:59:07.626845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.948 [2024-07-15 12:59:07.626874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.948 [2024-07-15 12:59:07.634645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.948 [2024-07-15 12:59:07.635021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.948 [2024-07-15 12:59:07.635051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.948 [2024-07-15 12:59:07.642916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90 00:29:15.948 [2024-07-15 12:59:07.643209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.948 [2024-07-15 12:59:07.643240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0
00:29:15.948 [2024-07-15 12:59:07.651523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:15.948 [2024-07-15 12:59:07.651841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.948 [2024-07-15 12:59:07.651870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line sequence (data_crc32_calc_done Data digest error -> WRITE command -> COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for dozens of further WRITEs between 12:59:07.660 and 12:59:08.032, varying only lba and sqhd; repeats elided ...]
00:29:16.211 [2024-07-15 12:59:08.038194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a0cd0) with pdu=0x2000190fef90
00:29:16.211 [2024-07-15 12:59:08.038372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.211 [2024-07-15 12:59:08.038410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.211
00:29:16.211 Latency(us)
00:29:16.211 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:16.211 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:16.211 nvme0n1 : 2.00 4829.38 603.67 0.00 0.00 3304.65 2249.08 13226.36
00:29:16.211 ===================================================================================================================
00:29:16.211 Total : 4829.38 603.67 0.00 0.00 3304.65 2249.08 13226.36
00:29:16.211 0
12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:16.211 | .driver_specific
00:29:16.211 | .nvme_error
00:29:16.211 | .status_code
00:29:16.211 | .command_transient_transport_error'
12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 312 > 0 ))
12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4105600
00:29:16.470 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 4105600 ']' 00:29:16.470 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 4105600 00:29:16.470 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:16.470 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:16.470 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4105600 00:29:16.470 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:16.470 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:16.470 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4105600' 00:29:16.470 killing process with pid 4105600 00:29:16.470 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 4105600 00:29:16.470 Received shutdown signal, test time was about 2.000000 seconds 00:29:16.470 00:29:16.470 Latency(us) 00:29:16.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:16.470 =================================================================================================================== 00:29:16.470 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:16.470 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 4105600 00:29:16.729 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 4102701 00:29:16.729 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 4102701 ']' 00:29:16.729 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 4102701 00:29:16.729 12:59:08 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:16.729 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:16.729 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4102701 00:29:16.729 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:16.729 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:16.729 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4102701' 00:29:16.729 killing process with pid 4102701 00:29:16.730 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 4102701 00:29:16.730 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 4102701 00:29:16.988 00:29:16.988 real 0m19.740s 00:29:16.988 user 0m40.759s 00:29:16.988 sys 0m4.599s 00:29:16.988 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:16.988 12:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:16.988 ************************************ 00:29:16.988 END TEST nvmf_digest_error 00:29:16.988 ************************************ 00:29:16.988 12:59:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:29:16.988 12:59:08 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:16.988 12:59:08 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:16.988 12:59:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:16.988 12:59:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:29:16.989 12:59:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:16.989 12:59:08 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:29:16.989 12:59:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:16.989 12:59:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:16.989 rmmod nvme_tcp 00:29:17.247 rmmod nvme_fabrics 00:29:17.247 rmmod nvme_keyring 00:29:17.247 12:59:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:17.247 12:59:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:29:17.247 12:59:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:29:17.247 12:59:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 4102701 ']' 00:29:17.247 12:59:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 4102701 00:29:17.247 12:59:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 4102701 ']' 00:29:17.247 12:59:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 4102701 00:29:17.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (4102701) - No such process 00:29:17.247 12:59:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 4102701 is not found' 00:29:17.247 Process with pid 4102701 is not found 00:29:17.247 12:59:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:17.247 12:59:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:17.247 12:59:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:17.247 12:59:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:17.247 12:59:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:17.247 12:59:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.247 12:59:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:17.247 12:59:08 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.228 12:59:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:19.228 00:29:19.228 real 0m45.024s 00:29:19.228 user 1m17.730s 00:29:19.228 sys 0m13.261s 00:29:19.228 12:59:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:19.228 12:59:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:19.228 ************************************ 00:29:19.228 END TEST nvmf_digest 00:29:19.228 ************************************ 00:29:19.228 12:59:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:19.228 12:59:11 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:29:19.228 12:59:11 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:29:19.228 12:59:11 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:29:19.228 12:59:11 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:19.228 12:59:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:19.228 12:59:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:19.228 12:59:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:19.228 ************************************ 00:29:19.228 START TEST nvmf_bdevperf 00:29:19.228 ************************************ 00:29:19.228 12:59:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:19.488 * Looking for test storage... 
00:29:19.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:19.488 12:59:11 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:19.488 12:59:11 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:19.488 12:59:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:24.782 12:59:16 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.782 
12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:24.782 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:24.782 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:24.782 Found net devices under 0000:af:00.0: cvl_0_0 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:24.782 Found net devices under 0000:af:00.1: cvl_0_1 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:24.782 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:24.783 12:59:16 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.783 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:24.783 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:24.783 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.783 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:24.783 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.041 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:25.041 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.041 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:25.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:25.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:29:25.041 00:29:25.041 --- 10.0.0.2 ping statistics --- 00:29:25.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.041 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:29:25.041 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:25.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:29:25.041 00:29:25.041 --- 10.0.0.1 ping statistics --- 00:29:25.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.041 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:29:25.041 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.041 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:25.041 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:25.041 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.041 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:25.041 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:25.041 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.041 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:25.041 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:25.041 12:59:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:25.042 12:59:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:25.042 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:25.042 12:59:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:25.042 12:59:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.042 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=4109812 00:29:25.042 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 4109812 00:29:25.042 12:59:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:25.042 12:59:16 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 4109812 ']' 00:29:25.042 12:59:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.042 12:59:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:25.042 12:59:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.042 12:59:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:25.042 12:59:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.042 [2024-07-15 12:59:16.902165] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:29:25.042 [2024-07-15 12:59:16.902223] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.042 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.300 [2024-07-15 12:59:16.987658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:25.300 [2024-07-15 12:59:17.092682] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.300 [2024-07-15 12:59:17.092727] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.300 [2024-07-15 12:59:17.092741] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.300 [2024-07-15 12:59:17.092752] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:25.300 [2024-07-15 12:59:17.092762] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:25.300 [2024-07-15 12:59:17.092906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:25.300 [2024-07-15 12:59:17.093263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.300 [2024-07-15 12:59:17.093263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:26.235 [2024-07-15 12:59:17.890386] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:26.235 Malloc0 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:26.235 [2024-07-15 12:59:17.957697] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:26.235 { 00:29:26.235 "params": { 00:29:26.235 "name": "Nvme$subsystem", 00:29:26.235 "trtype": "$TEST_TRANSPORT", 00:29:26.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.235 "adrfam": "ipv4", 00:29:26.235 "trsvcid": "$NVMF_PORT", 00:29:26.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.235 "hdgst": ${hdgst:-false}, 00:29:26.235 "ddgst": ${ddgst:-false} 00:29:26.235 }, 00:29:26.235 "method": "bdev_nvme_attach_controller" 00:29:26.235 } 00:29:26.235 EOF 00:29:26.235 )") 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:26.235 12:59:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:26.235 "params": { 00:29:26.235 "name": "Nvme1", 00:29:26.235 "trtype": "tcp", 00:29:26.235 "traddr": "10.0.0.2", 00:29:26.235 "adrfam": "ipv4", 00:29:26.235 "trsvcid": "4420", 00:29:26.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:26.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:26.235 "hdgst": false, 00:29:26.235 "ddgst": false 00:29:26.235 }, 00:29:26.235 "method": "bdev_nvme_attach_controller" 00:29:26.235 }' 00:29:26.235 [2024-07-15 12:59:18.011246] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:29:26.235 [2024-07-15 12:59:18.011320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4109906 ] 00:29:26.235 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.235 [2024-07-15 12:59:18.093044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.493 [2024-07-15 12:59:18.180617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.751 Running I/O for 1 seconds... 00:29:27.687 00:29:27.687 Latency(us) 00:29:27.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.687 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:27.687 Verification LBA range: start 0x0 length 0x4000 00:29:27.687 Nvme1n1 : 1.01 6241.06 24.38 0.00 0.00 20416.46 677.70 16681.89 00:29:27.687 =================================================================================================================== 00:29:27.687 Total : 6241.06 24.38 0.00 0.00 20416.46 677.70 16681.89 00:29:27.946 12:59:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=4110261 00:29:27.946 12:59:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:27.946 12:59:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:27.946 12:59:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:27.946 12:59:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:27.946 12:59:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:27.946 12:59:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:27.946 12:59:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:29:27.946 { 00:29:27.946 "params": { 00:29:27.946 "name": "Nvme$subsystem", 00:29:27.946 "trtype": "$TEST_TRANSPORT", 00:29:27.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.946 "adrfam": "ipv4", 00:29:27.946 "trsvcid": "$NVMF_PORT", 00:29:27.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.946 "hdgst": ${hdgst:-false}, 00:29:27.946 "ddgst": ${ddgst:-false} 00:29:27.946 }, 00:29:27.946 "method": "bdev_nvme_attach_controller" 00:29:27.946 } 00:29:27.946 EOF 00:29:27.946 )") 00:29:27.946 12:59:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:27.946 12:59:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:27.946 12:59:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:27.946 12:59:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:27.946 "params": { 00:29:27.946 "name": "Nvme1", 00:29:27.946 "trtype": "tcp", 00:29:27.946 "traddr": "10.0.0.2", 00:29:27.946 "adrfam": "ipv4", 00:29:27.946 "trsvcid": "4420", 00:29:27.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:27.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:27.946 "hdgst": false, 00:29:27.946 "ddgst": false 00:29:27.946 }, 00:29:27.946 "method": "bdev_nvme_attach_controller" 00:29:27.946 }' 00:29:27.947 [2024-07-15 12:59:19.736370] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:29:27.947 [2024-07-15 12:59:19.736432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4110261 ] 00:29:27.947 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.947 [2024-07-15 12:59:19.816755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.205 [2024-07-15 12:59:19.902643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.464 Running I/O for 15 seconds... 00:29:30.999 12:59:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 4109812 00:29:30.999 12:59:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:30.999 [2024-07-15 12:59:22.703744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.999 [2024-07-15 12:59:22.703790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.999 [2024-07-15 12:59:22.703814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:123264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.999 [2024-07-15 12:59:22.703827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.999 [2024-07-15 12:59:22.703843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:123272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.999 [2024-07-15 12:59:22.703856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.999 [2024-07-15 12:59:22.703872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:123280 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:30.999 [2024-07-15 12:59:22.703884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.705707] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:123688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.705716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.705729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.705739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.705751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.705762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.705774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:123712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.705784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.705807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:123720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.705816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.705830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:123728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.705840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.705852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:123736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.705861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.705873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:123744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.705882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.705894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.705904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.705916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:123760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.705925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.705937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:123768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.705947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.705959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:123776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:31.001 [2024-07-15 12:59:22.705968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.705981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:123784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.705990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.706002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.706011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.706023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:123800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.706033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.706047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:123808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.706056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.706068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.706077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.706089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:123824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.706099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.706110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:123832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.706120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.706131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:123840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.706141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.706154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:123848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.706164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.706177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:123856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.706187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.706199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:123864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.706208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.706220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:123872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.706229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.001 [2024-07-15 12:59:22.706241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:123880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.001 [2024-07-15 12:59:22.706251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:123888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:123896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:123904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:123912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:31.002 [2024-07-15 12:59:22.706345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:123920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:123928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:123936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:123952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:124272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.002 [2024-07-15 12:59:22.706472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:123968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:123976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:123984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:123992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:124000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:124008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:124024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:124032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:31.002 [2024-07-15 12:59:22.706718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:124048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:124056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:124064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:124072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.002 [2024-07-15 12:59:22.706806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d81c0 is same with the state(5) to be set 00:29:31.002 [2024-07-15 12:59:22.706828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.002 [2024-07-15 12:59:22.706836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:29:31.002 [2024-07-15 12:59:22.706844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124080 len:8 PRP1 0x0 PRP2 0x0 00:29:31.002 [2024-07-15 12:59:22.706855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.002 [2024-07-15 12:59:22.706904] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23d81c0 was disconnected and freed. reset controller. 00:29:31.002 [2024-07-15 12:59:22.711187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.002 [2024-07-15 12:59:22.711249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.002 [2024-07-15 12:59:22.712036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.002 [2024-07-15 12:59:22.712056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.002 [2024-07-15 12:59:22.712067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.002 [2024-07-15 12:59:22.712342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.002 [2024-07-15 12:59:22.712608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.002 [2024-07-15 12:59:22.712619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.002 [2024-07-15 12:59:22.712630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.002 [2024-07-15 12:59:22.716890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.002 [2024-07-15 12:59:22.726201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.002 [2024-07-15 12:59:22.726624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.002 [2024-07-15 12:59:22.726646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.002 [2024-07-15 12:59:22.726657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.002 [2024-07-15 12:59:22.726922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.002 [2024-07-15 12:59:22.727187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.002 [2024-07-15 12:59:22.727199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.002 [2024-07-15 12:59:22.727209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.002 [2024-07-15 12:59:22.731478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.002 [2024-07-15 12:59:22.740782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.002 [2024-07-15 12:59:22.741313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.002 [2024-07-15 12:59:22.741358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.002 [2024-07-15 12:59:22.741380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.002 [2024-07-15 12:59:22.741921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.002 [2024-07-15 12:59:22.742187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.002 [2024-07-15 12:59:22.742199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.002 [2024-07-15 12:59:22.742208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.002 [2024-07-15 12:59:22.746478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.002 [2024-07-15 12:59:22.755704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.002 [2024-07-15 12:59:22.756227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.002 [2024-07-15 12:59:22.756250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.002 [2024-07-15 12:59:22.756272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.002 [2024-07-15 12:59:22.756537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.003 [2024-07-15 12:59:22.756803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.003 [2024-07-15 12:59:22.756816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.003 [2024-07-15 12:59:22.756826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.003 [2024-07-15 12:59:22.761083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.003 [2024-07-15 12:59:22.770387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.003 [2024-07-15 12:59:22.770959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.003 [2024-07-15 12:59:22.771001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.003 [2024-07-15 12:59:22.771023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.003 [2024-07-15 12:59:22.771616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.003 [2024-07-15 12:59:22.771930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.003 [2024-07-15 12:59:22.771941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.003 [2024-07-15 12:59:22.771951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.003 [2024-07-15 12:59:22.776205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.003 [2024-07-15 12:59:22.784992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.003 [2024-07-15 12:59:22.785536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.003 [2024-07-15 12:59:22.785558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.003 [2024-07-15 12:59:22.785569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.003 [2024-07-15 12:59:22.785832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.003 [2024-07-15 12:59:22.786097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.003 [2024-07-15 12:59:22.786109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.003 [2024-07-15 12:59:22.786118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.003 [2024-07-15 12:59:22.790374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.003 [2024-07-15 12:59:22.799653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.003 [2024-07-15 12:59:22.800225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.003 [2024-07-15 12:59:22.800280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.003 [2024-07-15 12:59:22.800304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.003 [2024-07-15 12:59:22.800866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.003 [2024-07-15 12:59:22.801272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.003 [2024-07-15 12:59:22.801289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.003 [2024-07-15 12:59:22.801303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.003 [2024-07-15 12:59:22.807554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.003 [2024-07-15 12:59:22.814586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.003 [2024-07-15 12:59:22.814987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.003 [2024-07-15 12:59:22.815008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.003 [2024-07-15 12:59:22.815018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.003 [2024-07-15 12:59:22.815289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.003 [2024-07-15 12:59:22.815555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.003 [2024-07-15 12:59:22.815567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.003 [2024-07-15 12:59:22.815576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.003 [2024-07-15 12:59:22.819831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.003 [2024-07-15 12:59:22.829369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.003 [2024-07-15 12:59:22.829821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.003 [2024-07-15 12:59:22.829842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.003 [2024-07-15 12:59:22.829852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.003 [2024-07-15 12:59:22.830117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.003 [2024-07-15 12:59:22.830388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.003 [2024-07-15 12:59:22.830401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.003 [2024-07-15 12:59:22.830411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.003 [2024-07-15 12:59:22.834661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.003 [2024-07-15 12:59:22.843943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.003 [2024-07-15 12:59:22.844353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.003 [2024-07-15 12:59:22.844374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.003 [2024-07-15 12:59:22.844385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.003 [2024-07-15 12:59:22.844649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.003 [2024-07-15 12:59:22.844914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.003 [2024-07-15 12:59:22.844925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.003 [2024-07-15 12:59:22.844934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.003 [2024-07-15 12:59:22.849189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.003 [2024-07-15 12:59:22.858733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.003 [2024-07-15 12:59:22.859261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.003 [2024-07-15 12:59:22.859307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.003 [2024-07-15 12:59:22.859330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.003 [2024-07-15 12:59:22.859907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.003 [2024-07-15 12:59:22.860173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.003 [2024-07-15 12:59:22.860184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.003 [2024-07-15 12:59:22.860193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.003 [2024-07-15 12:59:22.864447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.003 [2024-07-15 12:59:22.873488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.003 [2024-07-15 12:59:22.873936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.003 [2024-07-15 12:59:22.873957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.003 [2024-07-15 12:59:22.873967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.003 [2024-07-15 12:59:22.874231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.003 [2024-07-15 12:59:22.874503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.003 [2024-07-15 12:59:22.874516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.003 [2024-07-15 12:59:22.874525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.003 [2024-07-15 12:59:22.878774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.003 [2024-07-15 12:59:22.888062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.003 [2024-07-15 12:59:22.888573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.003 [2024-07-15 12:59:22.888594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.003 [2024-07-15 12:59:22.888604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.003 [2024-07-15 12:59:22.888868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.004 [2024-07-15 12:59:22.889133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.004 [2024-07-15 12:59:22.889145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.004 [2024-07-15 12:59:22.889155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.004 [2024-07-15 12:59:22.893409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.004 [2024-07-15 12:59:22.902693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.004 [2024-07-15 12:59:22.903162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.004 [2024-07-15 12:59:22.903182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.004 [2024-07-15 12:59:22.903196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.004 [2024-07-15 12:59:22.903469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.004 [2024-07-15 12:59:22.903734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.004 [2024-07-15 12:59:22.903745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.004 [2024-07-15 12:59:22.903755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.004 [2024-07-15 12:59:22.908007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.004 [2024-07-15 12:59:22.917300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.004 [2024-07-15 12:59:22.917783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.004 [2024-07-15 12:59:22.917804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.004 [2024-07-15 12:59:22.917814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.004 [2024-07-15 12:59:22.918078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.004 [2024-07-15 12:59:22.918352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.004 [2024-07-15 12:59:22.918364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.004 [2024-07-15 12:59:22.918373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.004 [2024-07-15 12:59:22.922628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.004 [2024-07-15 12:59:22.931913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.004 [2024-07-15 12:59:22.932398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.004 [2024-07-15 12:59:22.932441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.004 [2024-07-15 12:59:22.932463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.004 [2024-07-15 12:59:22.933043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.004 [2024-07-15 12:59:22.933375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.004 [2024-07-15 12:59:22.933387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.004 [2024-07-15 12:59:22.933396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.263 [2024-07-15 12:59:22.937650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.263 [2024-07-15 12:59:22.946689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.263 [2024-07-15 12:59:22.947141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.263 [2024-07-15 12:59:22.947162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.263 [2024-07-15 12:59:22.947172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.263 [2024-07-15 12:59:22.947444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.263 [2024-07-15 12:59:22.947709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.263 [2024-07-15 12:59:22.947724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.263 [2024-07-15 12:59:22.947734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.263 [2024-07-15 12:59:22.951984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.263 [2024-07-15 12:59:22.961275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.263 [2024-07-15 12:59:22.961733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.263 [2024-07-15 12:59:22.961754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.263 [2024-07-15 12:59:22.961764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.263 [2024-07-15 12:59:22.962029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.263 [2024-07-15 12:59:22.962300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.263 [2024-07-15 12:59:22.962312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.263 [2024-07-15 12:59:22.962322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.263 [2024-07-15 12:59:22.966574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.264 [2024-07-15 12:59:22.975857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.264 [2024-07-15 12:59:22.976433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.264 [2024-07-15 12:59:22.976454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.264 [2024-07-15 12:59:22.976464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.264 [2024-07-15 12:59:22.976728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.264 [2024-07-15 12:59:22.976993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.264 [2024-07-15 12:59:22.977004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.264 [2024-07-15 12:59:22.977013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.264 [2024-07-15 12:59:22.981268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.264 [2024-07-15 12:59:22.990555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.264 [2024-07-15 12:59:22.991019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.264 [2024-07-15 12:59:22.991040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.264 [2024-07-15 12:59:22.991050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.264 [2024-07-15 12:59:22.991320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.264 [2024-07-15 12:59:22.991586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.264 [2024-07-15 12:59:22.991597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.264 [2024-07-15 12:59:22.991607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.264 [2024-07-15 12:59:22.995860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.264 [2024-07-15 12:59:23.005139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.264 [2024-07-15 12:59:23.005707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.264 [2024-07-15 12:59:23.005728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.264 [2024-07-15 12:59:23.005738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.264 [2024-07-15 12:59:23.006003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.264 [2024-07-15 12:59:23.006274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.264 [2024-07-15 12:59:23.006286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.264 [2024-07-15 12:59:23.006296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.264 [2024-07-15 12:59:23.010545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.264 [2024-07-15 12:59:23.019823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.264 [2024-07-15 12:59:23.020358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.264 [2024-07-15 12:59:23.020401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.264 [2024-07-15 12:59:23.020422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.264 [2024-07-15 12:59:23.020999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.264 [2024-07-15 12:59:23.021375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.264 [2024-07-15 12:59:23.021387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.264 [2024-07-15 12:59:23.021397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.264 [2024-07-15 12:59:23.025647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.264 [2024-07-15 12:59:23.034417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.264 [2024-07-15 12:59:23.034990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.264 [2024-07-15 12:59:23.035031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.264 [2024-07-15 12:59:23.035053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.264 [2024-07-15 12:59:23.035598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.264 [2024-07-15 12:59:23.035864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.264 [2024-07-15 12:59:23.035875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.264 [2024-07-15 12:59:23.035885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.264 [2024-07-15 12:59:23.040131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.264 [2024-07-15 12:59:23.049156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.264 [2024-07-15 12:59:23.049733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.264 [2024-07-15 12:59:23.049775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.264 [2024-07-15 12:59:23.049797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.264 [2024-07-15 12:59:23.050398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.264 [2024-07-15 12:59:23.050709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.264 [2024-07-15 12:59:23.050721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.264 [2024-07-15 12:59:23.050730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.264 [2024-07-15 12:59:23.054975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.264 [2024-07-15 12:59:23.063748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.264 [2024-07-15 12:59:23.064230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.264 [2024-07-15 12:59:23.064252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.264 [2024-07-15 12:59:23.064270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.264 [2024-07-15 12:59:23.064533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.264 [2024-07-15 12:59:23.064797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.264 [2024-07-15 12:59:23.064811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.264 [2024-07-15 12:59:23.064820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.264 [2024-07-15 12:59:23.069067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.264 [2024-07-15 12:59:23.078338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.264 [2024-07-15 12:59:23.078858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.264 [2024-07-15 12:59:23.078879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.264 [2024-07-15 12:59:23.078889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.264 [2024-07-15 12:59:23.079153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.264 [2024-07-15 12:59:23.079425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.264 [2024-07-15 12:59:23.079438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.264 [2024-07-15 12:59:23.079447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.264 [2024-07-15 12:59:23.083689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.264 [2024-07-15 12:59:23.092963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.264 [2024-07-15 12:59:23.093531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.264 [2024-07-15 12:59:23.093573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.264 [2024-07-15 12:59:23.093595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.264 [2024-07-15 12:59:23.094147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.264 [2024-07-15 12:59:23.094543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.264 [2024-07-15 12:59:23.094560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.264 [2024-07-15 12:59:23.094579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.264 [2024-07-15 12:59:23.100817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.264 [2024-07-15 12:59:23.108008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.264 [2024-07-15 12:59:23.108559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.264 [2024-07-15 12:59:23.108580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.264 [2024-07-15 12:59:23.108590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.264 [2024-07-15 12:59:23.108855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.264 [2024-07-15 12:59:23.109119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.264 [2024-07-15 12:59:23.109130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.264 [2024-07-15 12:59:23.109140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.264 [2024-07-15 12:59:23.113395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.264 [2024-07-15 12:59:23.122670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.264 [2024-07-15 12:59:23.123263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.264 [2024-07-15 12:59:23.123306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.264 [2024-07-15 12:59:23.123327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.264 [2024-07-15 12:59:23.123871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.264 [2024-07-15 12:59:23.124136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.264 [2024-07-15 12:59:23.124147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.264 [2024-07-15 12:59:23.124156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.264 [2024-07-15 12:59:23.128408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.265 [2024-07-15 12:59:23.137436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.265 [2024-07-15 12:59:23.137996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.265 [2024-07-15 12:59:23.138038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.265 [2024-07-15 12:59:23.138060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.265 [2024-07-15 12:59:23.138635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.265 [2024-07-15 12:59:23.138901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.265 [2024-07-15 12:59:23.138913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.265 [2024-07-15 12:59:23.138922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.265 [2024-07-15 12:59:23.143176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.265 [2024-07-15 12:59:23.152192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.265 [2024-07-15 12:59:23.152749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.265 [2024-07-15 12:59:23.152801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.265 [2024-07-15 12:59:23.152824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.265 [2024-07-15 12:59:23.153417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.265 [2024-07-15 12:59:23.153991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.265 [2024-07-15 12:59:23.154002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.265 [2024-07-15 12:59:23.154011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.265 [2024-07-15 12:59:23.158261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.265 [2024-07-15 12:59:23.166775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.265 [2024-07-15 12:59:23.167339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.265 [2024-07-15 12:59:23.167382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.265 [2024-07-15 12:59:23.167403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.265 [2024-07-15 12:59:23.167950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.265 [2024-07-15 12:59:23.168215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.265 [2024-07-15 12:59:23.168226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.265 [2024-07-15 12:59:23.168235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.265 [2024-07-15 12:59:23.172491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.265 [2024-07-15 12:59:23.181517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:31.265 [2024-07-15 12:59:23.181996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.265 [2024-07-15 12:59:23.182017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:31.265 [2024-07-15 12:59:23.182027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:31.265 [2024-07-15 12:59:23.182299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:31.265 [2024-07-15 12:59:23.182565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:31.265 [2024-07-15 12:59:23.182576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:31.265 [2024-07-15 12:59:23.182586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:31.265 [2024-07-15 12:59:23.186837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:31.265 [2024-07-15 12:59:23.196112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.265 [2024-07-15 12:59:23.196662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.265 [2024-07-15 12:59:23.196683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.265 [2024-07-15 12:59:23.196694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.265 [2024-07-15 12:59:23.196957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.265 [2024-07-15 12:59:23.197226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.265 [2024-07-15 12:59:23.197238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.265 [2024-07-15 12:59:23.197247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.265 [2024-07-15 12:59:23.201504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.524 [2024-07-15 12:59:23.210785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.524 [2024-07-15 12:59:23.211244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.524 [2024-07-15 12:59:23.211270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.524 [2024-07-15 12:59:23.211281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.524 [2024-07-15 12:59:23.211546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.524 [2024-07-15 12:59:23.211811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.524 [2024-07-15 12:59:23.211822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.524 [2024-07-15 12:59:23.211832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.524 [2024-07-15 12:59:23.216083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.524 [2024-07-15 12:59:23.225368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.524 [2024-07-15 12:59:23.225922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.524 [2024-07-15 12:59:23.225943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.524 [2024-07-15 12:59:23.225953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.524 [2024-07-15 12:59:23.226218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.524 [2024-07-15 12:59:23.226490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.524 [2024-07-15 12:59:23.226502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.524 [2024-07-15 12:59:23.226511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.524 [2024-07-15 12:59:23.230763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.525 [2024-07-15 12:59:23.240030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.525 [2024-07-15 12:59:23.240609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.525 [2024-07-15 12:59:23.240630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.525 [2024-07-15 12:59:23.240641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.525 [2024-07-15 12:59:23.240905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.525 [2024-07-15 12:59:23.241169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.525 [2024-07-15 12:59:23.241180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.525 [2024-07-15 12:59:23.241190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.525 [2024-07-15 12:59:23.245457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.525 [2024-07-15 12:59:23.254732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.525 [2024-07-15 12:59:23.255285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.525 [2024-07-15 12:59:23.255306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.525 [2024-07-15 12:59:23.255316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.525 [2024-07-15 12:59:23.255580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.525 [2024-07-15 12:59:23.255844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.525 [2024-07-15 12:59:23.255855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.525 [2024-07-15 12:59:23.255864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.525 [2024-07-15 12:59:23.260113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.525 [2024-07-15 12:59:23.269446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.525 [2024-07-15 12:59:23.269984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.525 [2024-07-15 12:59:23.270037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.525 [2024-07-15 12:59:23.270059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.525 [2024-07-15 12:59:23.270653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.525 [2024-07-15 12:59:23.271213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.525 [2024-07-15 12:59:23.271225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.525 [2024-07-15 12:59:23.271234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.525 [2024-07-15 12:59:23.275484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.525 [2024-07-15 12:59:23.284009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.525 [2024-07-15 12:59:23.284573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.525 [2024-07-15 12:59:23.284615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.525 [2024-07-15 12:59:23.284637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.525 [2024-07-15 12:59:23.285188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.525 [2024-07-15 12:59:23.285461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.525 [2024-07-15 12:59:23.285473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.525 [2024-07-15 12:59:23.285482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.525 [2024-07-15 12:59:23.289726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.525 [2024-07-15 12:59:23.298743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.525 [2024-07-15 12:59:23.299278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.525 [2024-07-15 12:59:23.299320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.525 [2024-07-15 12:59:23.299349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.525 [2024-07-15 12:59:23.299940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.525 [2024-07-15 12:59:23.300207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.525 [2024-07-15 12:59:23.300218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.525 [2024-07-15 12:59:23.300227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.525 [2024-07-15 12:59:23.304485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.525 [2024-07-15 12:59:23.313509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.525 [2024-07-15 12:59:23.314039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.525 [2024-07-15 12:59:23.314060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.525 [2024-07-15 12:59:23.314071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.525 [2024-07-15 12:59:23.314343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.525 [2024-07-15 12:59:23.314609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.525 [2024-07-15 12:59:23.314621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.525 [2024-07-15 12:59:23.314630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.525 [2024-07-15 12:59:23.318873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.525 [2024-07-15 12:59:23.328150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.525 [2024-07-15 12:59:23.328681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.525 [2024-07-15 12:59:23.328703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.525 [2024-07-15 12:59:23.328713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.525 [2024-07-15 12:59:23.328978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.525 [2024-07-15 12:59:23.329244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.525 [2024-07-15 12:59:23.329261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.525 [2024-07-15 12:59:23.329271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.525 [2024-07-15 12:59:23.333515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.525 [2024-07-15 12:59:23.342799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.525 [2024-07-15 12:59:23.343285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.525 [2024-07-15 12:59:23.343326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.525 [2024-07-15 12:59:23.343348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.525 [2024-07-15 12:59:23.343926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.525 [2024-07-15 12:59:23.344269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.525 [2024-07-15 12:59:23.344289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.525 [2024-07-15 12:59:23.344298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.525 [2024-07-15 12:59:23.348546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.525 [2024-07-15 12:59:23.357571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.525 [2024-07-15 12:59:23.358127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.525 [2024-07-15 12:59:23.358148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.525 [2024-07-15 12:59:23.358158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.525 [2024-07-15 12:59:23.358431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.525 [2024-07-15 12:59:23.358697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.525 [2024-07-15 12:59:23.358708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.525 [2024-07-15 12:59:23.358718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.525 [2024-07-15 12:59:23.362984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.525 [2024-07-15 12:59:23.372263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.525 [2024-07-15 12:59:23.372819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.525 [2024-07-15 12:59:23.372840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.525 [2024-07-15 12:59:23.372850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.525 [2024-07-15 12:59:23.373114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.525 [2024-07-15 12:59:23.373385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.525 [2024-07-15 12:59:23.373397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.525 [2024-07-15 12:59:23.373406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.525 [2024-07-15 12:59:23.377656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.525 [2024-07-15 12:59:23.386941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.525 [2024-07-15 12:59:23.387503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.525 [2024-07-15 12:59:23.387546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.525 [2024-07-15 12:59:23.387567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.525 [2024-07-15 12:59:23.388155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.525 [2024-07-15 12:59:23.388426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.525 [2024-07-15 12:59:23.388438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.525 [2024-07-15 12:59:23.388448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.526 [2024-07-15 12:59:23.392698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.526 [2024-07-15 12:59:23.401730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.526 [2024-07-15 12:59:23.402202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.526 [2024-07-15 12:59:23.402223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.526 [2024-07-15 12:59:23.402233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.526 [2024-07-15 12:59:23.402505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.526 [2024-07-15 12:59:23.402769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.526 [2024-07-15 12:59:23.402781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.526 [2024-07-15 12:59:23.402790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.526 [2024-07-15 12:59:23.407043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.526 [2024-07-15 12:59:23.416324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.526 [2024-07-15 12:59:23.416847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.526 [2024-07-15 12:59:23.416868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.526 [2024-07-15 12:59:23.416878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.526 [2024-07-15 12:59:23.417142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.526 [2024-07-15 12:59:23.417414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.526 [2024-07-15 12:59:23.417426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.526 [2024-07-15 12:59:23.417436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.526 [2024-07-15 12:59:23.421681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.526 [2024-07-15 12:59:23.430952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.526 [2024-07-15 12:59:23.431484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.526 [2024-07-15 12:59:23.431505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.526 [2024-07-15 12:59:23.431516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.526 [2024-07-15 12:59:23.431780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.526 [2024-07-15 12:59:23.432045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.526 [2024-07-15 12:59:23.432056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.526 [2024-07-15 12:59:23.432066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.526 [2024-07-15 12:59:23.436320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.526 [2024-07-15 12:59:23.445602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.526 [2024-07-15 12:59:23.446155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.526 [2024-07-15 12:59:23.446176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.526 [2024-07-15 12:59:23.446190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.526 [2024-07-15 12:59:23.446462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.526 [2024-07-15 12:59:23.446727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.526 [2024-07-15 12:59:23.446738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.526 [2024-07-15 12:59:23.446747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.526 [2024-07-15 12:59:23.450990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.526 [2024-07-15 12:59:23.460298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.526 [2024-07-15 12:59:23.460854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.526 [2024-07-15 12:59:23.460876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.526 [2024-07-15 12:59:23.460886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.526 [2024-07-15 12:59:23.461151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.526 [2024-07-15 12:59:23.461426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.526 [2024-07-15 12:59:23.461438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.526 [2024-07-15 12:59:23.461448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.786 [2024-07-15 12:59:23.465693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.786 [2024-07-15 12:59:23.474974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.786 [2024-07-15 12:59:23.475533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.786 [2024-07-15 12:59:23.475554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.786 [2024-07-15 12:59:23.475565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.786 [2024-07-15 12:59:23.475829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.786 [2024-07-15 12:59:23.476093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.786 [2024-07-15 12:59:23.476105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.786 [2024-07-15 12:59:23.476114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.786 [2024-07-15 12:59:23.480364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.786 [2024-07-15 12:59:23.489644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.786 [2024-07-15 12:59:23.490118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.786 [2024-07-15 12:59:23.490138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.786 [2024-07-15 12:59:23.490148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.786 [2024-07-15 12:59:23.490420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.786 [2024-07-15 12:59:23.490685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.786 [2024-07-15 12:59:23.490697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.786 [2024-07-15 12:59:23.490711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.786 [2024-07-15 12:59:23.494961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.786 [2024-07-15 12:59:23.504232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.786 [2024-07-15 12:59:23.504802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.786 [2024-07-15 12:59:23.504845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.786 [2024-07-15 12:59:23.504867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.786 [2024-07-15 12:59:23.505460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.786 [2024-07-15 12:59:23.505740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.786 [2024-07-15 12:59:23.505751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.786 [2024-07-15 12:59:23.505761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.786 [2024-07-15 12:59:23.510010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.786 [2024-07-15 12:59:23.518779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.786 [2024-07-15 12:59:23.519327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.786 [2024-07-15 12:59:23.519348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.786 [2024-07-15 12:59:23.519358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.786 [2024-07-15 12:59:23.519622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.786 [2024-07-15 12:59:23.519887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.786 [2024-07-15 12:59:23.519898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.786 [2024-07-15 12:59:23.519908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.786 [2024-07-15 12:59:23.524159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.786 [2024-07-15 12:59:23.533427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.786 [2024-07-15 12:59:23.533987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.786 [2024-07-15 12:59:23.534029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.786 [2024-07-15 12:59:23.534051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.786 [2024-07-15 12:59:23.534624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.786 [2024-07-15 12:59:23.534889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.786 [2024-07-15 12:59:23.534901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.786 [2024-07-15 12:59:23.534910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.786 [2024-07-15 12:59:23.539159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.786 [2024-07-15 12:59:23.548445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.786 [2024-07-15 12:59:23.549009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.786 [2024-07-15 12:59:23.549030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.786 [2024-07-15 12:59:23.549041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.786 [2024-07-15 12:59:23.549313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.786 [2024-07-15 12:59:23.549580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.786 [2024-07-15 12:59:23.549591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.786 [2024-07-15 12:59:23.549600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.786 [2024-07-15 12:59:23.553853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.786 [2024-07-15 12:59:23.563133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.786 [2024-07-15 12:59:23.563671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.786 [2024-07-15 12:59:23.563694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.786 [2024-07-15 12:59:23.563704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.786 [2024-07-15 12:59:23.563968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.786 [2024-07-15 12:59:23.564233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.786 [2024-07-15 12:59:23.564244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.786 [2024-07-15 12:59:23.564261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.786 [2024-07-15 12:59:23.568514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.786 [2024-07-15 12:59:23.577788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.786 [2024-07-15 12:59:23.578267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.786 [2024-07-15 12:59:23.578289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.786 [2024-07-15 12:59:23.578299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.786 [2024-07-15 12:59:23.578564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.786 [2024-07-15 12:59:23.578829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.786 [2024-07-15 12:59:23.578841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.786 [2024-07-15 12:59:23.578850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.786 [2024-07-15 12:59:23.583099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.786 [2024-07-15 12:59:23.592375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.786 [2024-07-15 12:59:23.592949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.786 [2024-07-15 12:59:23.592992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.786 [2024-07-15 12:59:23.593014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.786 [2024-07-15 12:59:23.593620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.786 [2024-07-15 12:59:23.593930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.786 [2024-07-15 12:59:23.593942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.786 [2024-07-15 12:59:23.593951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.786 [2024-07-15 12:59:23.598195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.786 [2024-07-15 12:59:23.606967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.786 [2024-07-15 12:59:23.607529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.786 [2024-07-15 12:59:23.607550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.787 [2024-07-15 12:59:23.607560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.787 [2024-07-15 12:59:23.607825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.787 [2024-07-15 12:59:23.608090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.787 [2024-07-15 12:59:23.608101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.787 [2024-07-15 12:59:23.608111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.787 [2024-07-15 12:59:23.612364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.787 [2024-07-15 12:59:23.621631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.787 [2024-07-15 12:59:23.622189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.787 [2024-07-15 12:59:23.622230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.787 [2024-07-15 12:59:23.622251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.787 [2024-07-15 12:59:23.622806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.787 [2024-07-15 12:59:23.623072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.787 [2024-07-15 12:59:23.623083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.787 [2024-07-15 12:59:23.623092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.787 [2024-07-15 12:59:23.627350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.787 [2024-07-15 12:59:23.636372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.787 [2024-07-15 12:59:23.636907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.787 [2024-07-15 12:59:23.636949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.787 [2024-07-15 12:59:23.636971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.787 [2024-07-15 12:59:23.637562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.787 [2024-07-15 12:59:23.638108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.787 [2024-07-15 12:59:23.638120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.787 [2024-07-15 12:59:23.638133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.787 [2024-07-15 12:59:23.642394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.787 [2024-07-15 12:59:23.650915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.787 [2024-07-15 12:59:23.651364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.787 [2024-07-15 12:59:23.651385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.787 [2024-07-15 12:59:23.651395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.787 [2024-07-15 12:59:23.651659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.787 [2024-07-15 12:59:23.651923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.787 [2024-07-15 12:59:23.651934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.787 [2024-07-15 12:59:23.651943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.787 [2024-07-15 12:59:23.656196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.787 [2024-07-15 12:59:23.665476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.787 [2024-07-15 12:59:23.666015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.787 [2024-07-15 12:59:23.666036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.787 [2024-07-15 12:59:23.666046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.787 [2024-07-15 12:59:23.666318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.787 [2024-07-15 12:59:23.666583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.787 [2024-07-15 12:59:23.666595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.787 [2024-07-15 12:59:23.666604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.787 [2024-07-15 12:59:23.670850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.787 [2024-07-15 12:59:23.680120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.787 [2024-07-15 12:59:23.680691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.787 [2024-07-15 12:59:23.680734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.787 [2024-07-15 12:59:23.680755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.787 [2024-07-15 12:59:23.681352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.787 [2024-07-15 12:59:23.681617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.787 [2024-07-15 12:59:23.681629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.787 [2024-07-15 12:59:23.681638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.787 [2024-07-15 12:59:23.685880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.787 [2024-07-15 12:59:23.694898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.787 [2024-07-15 12:59:23.695348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.787 [2024-07-15 12:59:23.695373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.787 [2024-07-15 12:59:23.695383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.787 [2024-07-15 12:59:23.695648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.787 [2024-07-15 12:59:23.695913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.787 [2024-07-15 12:59:23.695925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.787 [2024-07-15 12:59:23.695934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.787 [2024-07-15 12:59:23.700190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.787 [2024-07-15 12:59:23.709470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.787 [2024-07-15 12:59:23.710019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.787 [2024-07-15 12:59:23.710040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.787 [2024-07-15 12:59:23.710050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:31.787 [2024-07-15 12:59:23.710321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:31.787 [2024-07-15 12:59:23.710588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.787 [2024-07-15 12:59:23.710599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.787 [2024-07-15 12:59:23.710609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.787 [2024-07-15 12:59:23.714958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.787 [2024-07-15 12:59:23.724241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.787 [2024-07-15 12:59:23.724774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.787 [2024-07-15 12:59:23.724817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:31.787 [2024-07-15 12:59:23.724839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.047 [2024-07-15 12:59:23.725460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.047 [2024-07-15 12:59:23.725729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.047 [2024-07-15 12:59:23.725741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.047 [2024-07-15 12:59:23.725751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.047 [2024-07-15 12:59:23.730000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.047 [2024-07-15 12:59:23.739026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.047 [2024-07-15 12:59:23.739477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.047 [2024-07-15 12:59:23.739499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.047 [2024-07-15 12:59:23.739509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.047 [2024-07-15 12:59:23.739774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.047 [2024-07-15 12:59:23.740043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.047 [2024-07-15 12:59:23.740055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.047 [2024-07-15 12:59:23.740064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.047 [2024-07-15 12:59:23.744328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.047 [2024-07-15 12:59:23.753603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.047 [2024-07-15 12:59:23.754165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.047 [2024-07-15 12:59:23.754206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.047 [2024-07-15 12:59:23.754228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.047 [2024-07-15 12:59:23.754821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.047 [2024-07-15 12:59:23.755412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.047 [2024-07-15 12:59:23.755444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.047 [2024-07-15 12:59:23.755453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.047 [2024-07-15 12:59:23.759700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.047 [2024-07-15 12:59:23.768213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.047 [2024-07-15 12:59:23.768781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.047 [2024-07-15 12:59:23.768823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.047 [2024-07-15 12:59:23.768844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.047 [2024-07-15 12:59:23.769340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.047 [2024-07-15 12:59:23.769606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.047 [2024-07-15 12:59:23.769617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.047 [2024-07-15 12:59:23.769626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.047 [2024-07-15 12:59:23.773871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.047 [2024-07-15 12:59:23.782895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.047 [2024-07-15 12:59:23.783335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.047 [2024-07-15 12:59:23.783356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.047 [2024-07-15 12:59:23.783367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.047 [2024-07-15 12:59:23.783631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.047 [2024-07-15 12:59:23.783896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.047 [2024-07-15 12:59:23.783907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.047 [2024-07-15 12:59:23.783916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.047 [2024-07-15 12:59:23.788171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.047 [2024-07-15 12:59:23.797441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.047 [2024-07-15 12:59:23.797991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.048 [2024-07-15 12:59:23.798011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.048 [2024-07-15 12:59:23.798021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.048 [2024-07-15 12:59:23.798292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.048 [2024-07-15 12:59:23.798559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.048 [2024-07-15 12:59:23.798571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.048 [2024-07-15 12:59:23.798580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.048 [2024-07-15 12:59:23.802829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.048 [2024-07-15 12:59:23.812108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.048 [2024-07-15 12:59:23.812672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.048 [2024-07-15 12:59:23.812715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.048 [2024-07-15 12:59:23.812737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.048 [2024-07-15 12:59:23.813329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.048 [2024-07-15 12:59:23.813614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.048 [2024-07-15 12:59:23.813625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.048 [2024-07-15 12:59:23.813634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.048 [2024-07-15 12:59:23.817877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.048 [2024-07-15 12:59:23.826654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.048 [2024-07-15 12:59:23.827181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.048 [2024-07-15 12:59:23.827201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.048 [2024-07-15 12:59:23.827212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.048 [2024-07-15 12:59:23.827483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.048 [2024-07-15 12:59:23.827749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.048 [2024-07-15 12:59:23.827760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.048 [2024-07-15 12:59:23.827770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.048 [2024-07-15 12:59:23.832017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.048 [2024-07-15 12:59:23.841294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.048 [2024-07-15 12:59:23.841844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.048 [2024-07-15 12:59:23.841865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.048 [2024-07-15 12:59:23.841879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.048 [2024-07-15 12:59:23.842144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.048 [2024-07-15 12:59:23.842421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.048 [2024-07-15 12:59:23.842434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.048 [2024-07-15 12:59:23.842443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.048 [2024-07-15 12:59:23.846689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.048 [2024-07-15 12:59:23.855958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.048 [2024-07-15 12:59:23.856527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.048 [2024-07-15 12:59:23.856569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.048 [2024-07-15 12:59:23.856591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.048 [2024-07-15 12:59:23.857170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.048 [2024-07-15 12:59:23.857529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.048 [2024-07-15 12:59:23.857542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.048 [2024-07-15 12:59:23.857551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.048 [2024-07-15 12:59:23.861795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.048 [2024-07-15 12:59:23.870562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.048 [2024-07-15 12:59:23.871125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.048 [2024-07-15 12:59:23.871146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.048 [2024-07-15 12:59:23.871156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.048 [2024-07-15 12:59:23.871429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.048 [2024-07-15 12:59:23.871694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.048 [2024-07-15 12:59:23.871705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.048 [2024-07-15 12:59:23.871715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.048 [2024-07-15 12:59:23.875958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.048 [2024-07-15 12:59:23.885228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.048 [2024-07-15 12:59:23.885792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.048 [2024-07-15 12:59:23.885834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.048 [2024-07-15 12:59:23.885856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.048 [2024-07-15 12:59:23.886450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.048 [2024-07-15 12:59:23.886897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.048 [2024-07-15 12:59:23.886912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.048 [2024-07-15 12:59:23.886921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.048 [2024-07-15 12:59:23.891159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.048 [2024-07-15 12:59:23.899922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.048 [2024-07-15 12:59:23.900478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.048 [2024-07-15 12:59:23.900520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.048 [2024-07-15 12:59:23.900542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.048 [2024-07-15 12:59:23.901121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.048 [2024-07-15 12:59:23.901716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.048 [2024-07-15 12:59:23.901742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.048 [2024-07-15 12:59:23.901773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.048 [2024-07-15 12:59:23.906024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.048 [2024-07-15 12:59:23.914546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.048 [2024-07-15 12:59:23.915095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.048 [2024-07-15 12:59:23.915137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.048 [2024-07-15 12:59:23.915159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.048 [2024-07-15 12:59:23.915685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.048 [2024-07-15 12:59:23.915951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.048 [2024-07-15 12:59:23.915962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.048 [2024-07-15 12:59:23.915971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.048 [2024-07-15 12:59:23.920218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.048 [2024-07-15 12:59:23.929231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.048 [2024-07-15 12:59:23.929787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.048 [2024-07-15 12:59:23.929808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.048 [2024-07-15 12:59:23.929818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.048 [2024-07-15 12:59:23.930082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.048 [2024-07-15 12:59:23.930353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.048 [2024-07-15 12:59:23.930367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.048 [2024-07-15 12:59:23.930376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.048 [2024-07-15 12:59:23.934621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.048 [2024-07-15 12:59:23.943912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.048 [2024-07-15 12:59:23.944435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.048 [2024-07-15 12:59:23.944456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.048 [2024-07-15 12:59:23.944466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.048 [2024-07-15 12:59:23.944731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.048 [2024-07-15 12:59:23.944996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.048 [2024-07-15 12:59:23.945007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.048 [2024-07-15 12:59:23.945017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.048 [2024-07-15 12:59:23.949269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.048 [2024-07-15 12:59:23.958545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.048 [2024-07-15 12:59:23.959011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.049 [2024-07-15 12:59:23.959032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.049 [2024-07-15 12:59:23.959042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.049 [2024-07-15 12:59:23.959311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.049 [2024-07-15 12:59:23.959577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.049 [2024-07-15 12:59:23.959588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.049 [2024-07-15 12:59:23.959597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.049 [2024-07-15 12:59:23.963840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.049 [2024-07-15 12:59:23.973114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.049 [2024-07-15 12:59:23.973683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.049 [2024-07-15 12:59:23.973704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.049 [2024-07-15 12:59:23.973714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.049 [2024-07-15 12:59:23.973978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.049 [2024-07-15 12:59:23.974244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.049 [2024-07-15 12:59:23.974262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.049 [2024-07-15 12:59:23.974272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.049 [2024-07-15 12:59:23.978514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.308 [2024-07-15 12:59:23.987798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.308 [2024-07-15 12:59:23.988295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.308 [2024-07-15 12:59:23.988338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.308 [2024-07-15 12:59:23.988360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.308 [2024-07-15 12:59:23.988898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.308 [2024-07-15 12:59:23.989162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.308 [2024-07-15 12:59:23.989174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.308 [2024-07-15 12:59:23.989183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.308 [2024-07-15 12:59:23.993434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.308 [2024-07-15 12:59:24.002467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.308 [2024-07-15 12:59:24.003058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.308 [2024-07-15 12:59:24.003099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.308 [2024-07-15 12:59:24.003121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.308 [2024-07-15 12:59:24.003671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.308 [2024-07-15 12:59:24.003937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.308 [2024-07-15 12:59:24.003948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.308 [2024-07-15 12:59:24.003957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.308 [2024-07-15 12:59:24.008210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.308 [2024-07-15 12:59:24.017233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.308 [2024-07-15 12:59:24.017790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.308 [2024-07-15 12:59:24.017812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.308 [2024-07-15 12:59:24.017822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.308 [2024-07-15 12:59:24.018086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.308 [2024-07-15 12:59:24.018358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.308 [2024-07-15 12:59:24.018370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.308 [2024-07-15 12:59:24.018380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.308 [2024-07-15 12:59:24.022632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.308 [2024-07-15 12:59:24.031927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.308 [2024-07-15 12:59:24.032485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.308 [2024-07-15 12:59:24.032506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.308 [2024-07-15 12:59:24.032517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.308 [2024-07-15 12:59:24.032781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.308 [2024-07-15 12:59:24.033046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.308 [2024-07-15 12:59:24.033057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.308 [2024-07-15 12:59:24.033070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.308 [2024-07-15 12:59:24.037323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.308 [2024-07-15 12:59:24.046611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.308 [2024-07-15 12:59:24.047169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.308 [2024-07-15 12:59:24.047213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.308 [2024-07-15 12:59:24.047235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.308 [2024-07-15 12:59:24.047829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.308 [2024-07-15 12:59:24.048095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.308 [2024-07-15 12:59:24.048107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.308 [2024-07-15 12:59:24.048116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.308 [2024-07-15 12:59:24.052374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.308 [2024-07-15 12:59:24.061166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.308 [2024-07-15 12:59:24.061656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.308 [2024-07-15 12:59:24.061679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.308 [2024-07-15 12:59:24.061689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.308 [2024-07-15 12:59:24.061953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.308 [2024-07-15 12:59:24.062218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.308 [2024-07-15 12:59:24.062230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.308 [2024-07-15 12:59:24.062239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.308 [2024-07-15 12:59:24.066503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.308 [2024-07-15 12:59:24.075805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.308 [2024-07-15 12:59:24.076378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.308 [2024-07-15 12:59:24.076421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.308 [2024-07-15 12:59:24.076442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.308 [2024-07-15 12:59:24.077020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.308 [2024-07-15 12:59:24.077317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.308 [2024-07-15 12:59:24.077329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.308 [2024-07-15 12:59:24.077339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.308 [2024-07-15 12:59:24.081601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.308 [2024-07-15 12:59:24.090402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.308 [2024-07-15 12:59:24.090815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.308 [2024-07-15 12:59:24.090836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.308 [2024-07-15 12:59:24.090846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.308 [2024-07-15 12:59:24.091109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.308 [2024-07-15 12:59:24.091383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.308 [2024-07-15 12:59:24.091396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.308 [2024-07-15 12:59:24.091405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.308 [2024-07-15 12:59:24.095665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.308 [2024-07-15 12:59:24.104964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.308 [2024-07-15 12:59:24.105510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.308 [2024-07-15 12:59:24.105531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.308 [2024-07-15 12:59:24.105541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.308 [2024-07-15 12:59:24.105806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.308 [2024-07-15 12:59:24.106072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.308 [2024-07-15 12:59:24.106083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.308 [2024-07-15 12:59:24.106092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.308 [2024-07-15 12:59:24.110347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.308 [2024-07-15 12:59:24.119630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.308 [2024-07-15 12:59:24.120198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.308 [2024-07-15 12:59:24.120239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.308 [2024-07-15 12:59:24.120272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.308 [2024-07-15 12:59:24.120851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.308 [2024-07-15 12:59:24.121372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.308 [2024-07-15 12:59:24.121385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.308 [2024-07-15 12:59:24.121394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.308 [2024-07-15 12:59:24.125648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.308 [2024-07-15 12:59:24.134183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.308 [2024-07-15 12:59:24.134721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.308 [2024-07-15 12:59:24.134765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.308 [2024-07-15 12:59:24.134787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.308 [2024-07-15 12:59:24.135291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.308 [2024-07-15 12:59:24.135558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.308 [2024-07-15 12:59:24.135569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.308 [2024-07-15 12:59:24.135578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.308 [2024-07-15 12:59:24.139837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.308 [2024-07-15 12:59:24.148885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.308 [2024-07-15 12:59:24.149374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.308 [2024-07-15 12:59:24.149396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.308 [2024-07-15 12:59:24.149406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.308 [2024-07-15 12:59:24.149671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.308 [2024-07-15 12:59:24.149936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.308 [2024-07-15 12:59:24.149947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.308 [2024-07-15 12:59:24.149956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.308 [2024-07-15 12:59:24.154203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.308 [2024-07-15 12:59:24.163493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.308 [2024-07-15 12:59:24.164040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.308 [2024-07-15 12:59:24.164081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.308 [2024-07-15 12:59:24.164103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.308 [2024-07-15 12:59:24.164695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.308 [2024-07-15 12:59:24.165290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.308 [2024-07-15 12:59:24.165314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.308 [2024-07-15 12:59:24.165338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.308 [2024-07-15 12:59:24.169592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.308 [2024-07-15 12:59:24.178129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.308 [2024-07-15 12:59:24.178590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.308 [2024-07-15 12:59:24.178611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.308 [2024-07-15 12:59:24.178621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.308 [2024-07-15 12:59:24.178885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.308 [2024-07-15 12:59:24.179149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.308 [2024-07-15 12:59:24.179161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.308 [2024-07-15 12:59:24.179174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.308 [2024-07-15 12:59:24.183440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.308 [2024-07-15 12:59:24.192741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.308 [2024-07-15 12:59:24.193241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.308 [2024-07-15 12:59:24.193269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.308 [2024-07-15 12:59:24.193280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.309 [2024-07-15 12:59:24.193544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.309 [2024-07-15 12:59:24.193809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.309 [2024-07-15 12:59:24.193820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.309 [2024-07-15 12:59:24.193830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.309 [2024-07-15 12:59:24.198087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.309 [2024-07-15 12:59:24.207396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.309 [2024-07-15 12:59:24.207925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.309 [2024-07-15 12:59:24.207973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.309 [2024-07-15 12:59:24.207994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.309 [2024-07-15 12:59:24.208546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.309 [2024-07-15 12:59:24.208811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.309 [2024-07-15 12:59:24.208823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.309 [2024-07-15 12:59:24.208832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.309 [2024-07-15 12:59:24.213093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.309 [2024-07-15 12:59:24.222141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.309 [2024-07-15 12:59:24.222620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.309 [2024-07-15 12:59:24.222641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.309 [2024-07-15 12:59:24.222651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.309 [2024-07-15 12:59:24.222915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.309 [2024-07-15 12:59:24.223179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.309 [2024-07-15 12:59:24.223191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.309 [2024-07-15 12:59:24.223200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.309 [2024-07-15 12:59:24.227470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.309 [2024-07-15 12:59:24.236768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.309 [2024-07-15 12:59:24.237343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.309 [2024-07-15 12:59:24.237393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.309 [2024-07-15 12:59:24.237415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.309 [2024-07-15 12:59:24.237849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.309 [2024-07-15 12:59:24.238237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.309 [2024-07-15 12:59:24.238253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.309 [2024-07-15 12:59:24.238279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.309 [2024-07-15 12:59:24.244544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.568 [2024-07-15 12:59:24.251796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.568 [2024-07-15 12:59:24.252327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.568 [2024-07-15 12:59:24.252349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.568 [2024-07-15 12:59:24.252360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.568 [2024-07-15 12:59:24.252624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.568 [2024-07-15 12:59:24.252888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.568 [2024-07-15 12:59:24.252899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.568 [2024-07-15 12:59:24.252909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.568 [2024-07-15 12:59:24.257159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.568 [2024-07-15 12:59:24.266442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.568 [2024-07-15 12:59:24.266969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.568 [2024-07-15 12:59:24.266990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.568 [2024-07-15 12:59:24.267000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.568 [2024-07-15 12:59:24.267271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.568 [2024-07-15 12:59:24.267536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.568 [2024-07-15 12:59:24.267548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.568 [2024-07-15 12:59:24.267557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.568 [2024-07-15 12:59:24.271806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.568 [2024-07-15 12:59:24.281092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.568 [2024-07-15 12:59:24.281671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.568 [2024-07-15 12:59:24.281713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.568 [2024-07-15 12:59:24.281735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.568 [2024-07-15 12:59:24.282325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.568 [2024-07-15 12:59:24.282816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.568 [2024-07-15 12:59:24.282828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.569 [2024-07-15 12:59:24.282837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.569 [2024-07-15 12:59:24.287091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.569 [2024-07-15 12:59:24.295881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.569 [2024-07-15 12:59:24.296444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.569 [2024-07-15 12:59:24.296489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.569 [2024-07-15 12:59:24.296511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.569 [2024-07-15 12:59:24.297089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.569 [2024-07-15 12:59:24.297681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.569 [2024-07-15 12:59:24.297706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.569 [2024-07-15 12:59:24.297732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.569 [2024-07-15 12:59:24.301976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.569 [2024-07-15 12:59:24.310522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.569 [2024-07-15 12:59:24.311022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.569 [2024-07-15 12:59:24.311043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.569 [2024-07-15 12:59:24.311053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.569 [2024-07-15 12:59:24.311325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.569 [2024-07-15 12:59:24.311591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.569 [2024-07-15 12:59:24.311603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.569 [2024-07-15 12:59:24.311612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.569 [2024-07-15 12:59:24.315855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.569 [2024-07-15 12:59:24.325140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.569 [2024-07-15 12:59:24.325738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.569 [2024-07-15 12:59:24.325781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.569 [2024-07-15 12:59:24.325802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.569 [2024-07-15 12:59:24.326390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.569 [2024-07-15 12:59:24.326656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.569 [2024-07-15 12:59:24.326668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.569 [2024-07-15 12:59:24.326677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.569 [2024-07-15 12:59:24.330935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.569 [2024-07-15 12:59:24.339728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.569 [2024-07-15 12:59:24.340241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.569 [2024-07-15 12:59:24.340268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.569 [2024-07-15 12:59:24.340279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.569 [2024-07-15 12:59:24.340544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.569 [2024-07-15 12:59:24.340809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.569 [2024-07-15 12:59:24.340820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.569 [2024-07-15 12:59:24.340829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.569 [2024-07-15 12:59:24.345089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.569 [2024-07-15 12:59:24.354379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.569 [2024-07-15 12:59:24.354895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.569 [2024-07-15 12:59:24.354937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.569 [2024-07-15 12:59:24.354958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.569 [2024-07-15 12:59:24.355552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.569 [2024-07-15 12:59:24.356001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.569 [2024-07-15 12:59:24.356013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.569 [2024-07-15 12:59:24.356022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.569 [2024-07-15 12:59:24.360280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.569 [2024-07-15 12:59:24.369052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.569 [2024-07-15 12:59:24.369475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.569 [2024-07-15 12:59:24.369517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.569 [2024-07-15 12:59:24.369538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.569 [2024-07-15 12:59:24.370118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.569 [2024-07-15 12:59:24.370725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.569 [2024-07-15 12:59:24.370738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.569 [2024-07-15 12:59:24.370747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.569 [2024-07-15 12:59:24.374998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.569 [2024-07-15 12:59:24.383790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.569 [2024-07-15 12:59:24.384377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.569 [2024-07-15 12:59:24.384405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.569 [2024-07-15 12:59:24.384420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.569 [2024-07-15 12:59:24.384684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.569 [2024-07-15 12:59:24.384950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.569 [2024-07-15 12:59:24.384961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.569 [2024-07-15 12:59:24.384971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.569 [2024-07-15 12:59:24.389225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.569 [2024-07-15 12:59:24.398520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.569 [2024-07-15 12:59:24.399026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.569 [2024-07-15 12:59:24.399046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.569 [2024-07-15 12:59:24.399057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.569 [2024-07-15 12:59:24.399329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.569 [2024-07-15 12:59:24.399596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.569 [2024-07-15 12:59:24.399608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.569 [2024-07-15 12:59:24.399617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.569 [2024-07-15 12:59:24.403872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.569 [2024-07-15 12:59:24.413160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.569 [2024-07-15 12:59:24.413623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.569 [2024-07-15 12:59:24.413644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.569 [2024-07-15 12:59:24.413654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.569 [2024-07-15 12:59:24.413919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.569 [2024-07-15 12:59:24.414184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.569 [2024-07-15 12:59:24.414195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.569 [2024-07-15 12:59:24.414205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.569 [2024-07-15 12:59:24.418469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.569 [2024-07-15 12:59:24.427754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.569 [2024-07-15 12:59:24.428207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.569 [2024-07-15 12:59:24.428228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.569 [2024-07-15 12:59:24.428238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.569 [2024-07-15 12:59:24.428510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.569 [2024-07-15 12:59:24.428776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.569 [2024-07-15 12:59:24.428791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.569 [2024-07-15 12:59:24.428800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.569 [2024-07-15 12:59:24.433048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.569 [2024-07-15 12:59:24.442365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.569 [2024-07-15 12:59:24.442873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.569 [2024-07-15 12:59:24.442895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.569 [2024-07-15 12:59:24.442905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.569 [2024-07-15 12:59:24.443170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.570 [2024-07-15 12:59:24.443442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.570 [2024-07-15 12:59:24.443454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.570 [2024-07-15 12:59:24.443464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.570 [2024-07-15 12:59:24.447713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.570 [2024-07-15 12:59:24.456990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.570 [2024-07-15 12:59:24.457524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.570 [2024-07-15 12:59:24.457545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.570 [2024-07-15 12:59:24.457555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.570 [2024-07-15 12:59:24.457819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.570 [2024-07-15 12:59:24.458083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.570 [2024-07-15 12:59:24.458094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.570 [2024-07-15 12:59:24.458104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.570 [2024-07-15 12:59:24.462353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.570 [2024-07-15 12:59:24.471620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.570 [2024-07-15 12:59:24.472126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.570 [2024-07-15 12:59:24.472146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.570 [2024-07-15 12:59:24.472156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.570 [2024-07-15 12:59:24.472427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.570 [2024-07-15 12:59:24.472692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.570 [2024-07-15 12:59:24.472704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.570 [2024-07-15 12:59:24.472713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.570 [2024-07-15 12:59:24.476957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.570 [2024-07-15 12:59:24.486239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.570 [2024-07-15 12:59:24.486748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.570 [2024-07-15 12:59:24.486769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.570 [2024-07-15 12:59:24.486779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.570 [2024-07-15 12:59:24.487042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.570 [2024-07-15 12:59:24.487314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.570 [2024-07-15 12:59:24.487326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.570 [2024-07-15 12:59:24.487335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.570 [2024-07-15 12:59:24.491586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.570 [2024-07-15 12:59:24.500877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.570 [2024-07-15 12:59:24.501427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.570 [2024-07-15 12:59:24.501449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.570 [2024-07-15 12:59:24.501459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.570 [2024-07-15 12:59:24.501723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.570 [2024-07-15 12:59:24.501988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.570 [2024-07-15 12:59:24.501999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.570 [2024-07-15 12:59:24.502008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.570 [2024-07-15 12:59:24.506272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.829 [2024-07-15 12:59:24.515571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.829 [2024-07-15 12:59:24.516126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.829 [2024-07-15 12:59:24.516148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.829 [2024-07-15 12:59:24.516158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.830 [2024-07-15 12:59:24.516430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.830 [2024-07-15 12:59:24.516697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.830 [2024-07-15 12:59:24.516708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.830 [2024-07-15 12:59:24.516718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.830 [2024-07-15 12:59:24.520974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.830 [2024-07-15 12:59:24.530285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.830 [2024-07-15 12:59:24.530687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.830 [2024-07-15 12:59:24.530708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.830 [2024-07-15 12:59:24.530718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.830 [2024-07-15 12:59:24.530985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.830 [2024-07-15 12:59:24.531250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.830 [2024-07-15 12:59:24.531270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.830 [2024-07-15 12:59:24.531280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.830 [2024-07-15 12:59:24.535531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.830 [2024-07-15 12:59:24.545106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.830 [2024-07-15 12:59:24.545812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.830 [2024-07-15 12:59:24.545836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.830 [2024-07-15 12:59:24.545847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.830 [2024-07-15 12:59:24.546113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.830 [2024-07-15 12:59:24.546384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.830 [2024-07-15 12:59:24.546397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.830 [2024-07-15 12:59:24.546406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.830 [2024-07-15 12:59:24.550659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.830 [2024-07-15 12:59:24.559688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.830 [2024-07-15 12:59:24.560219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.830 [2024-07-15 12:59:24.560240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:32.830 [2024-07-15 12:59:24.560250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:32.830 [2024-07-15 12:59:24.560524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:32.830 [2024-07-15 12:59:24.560789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.830 [2024-07-15 12:59:24.560800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.830 [2024-07-15 12:59:24.560810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.830 [2024-07-15 12:59:24.565054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.830 [2024-07-15 12:59:24.574327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.830 [2024-07-15 12:59:24.574777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.830 [2024-07-15 12:59:24.574798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.830 [2024-07-15 12:59:24.574808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.830 [2024-07-15 12:59:24.575072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.830 [2024-07-15 12:59:24.575345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.830 [2024-07-15 12:59:24.575357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.830 [2024-07-15 12:59:24.575371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.830 [2024-07-15 12:59:24.579613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.830 [2024-07-15 12:59:24.588864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.830 [2024-07-15 12:59:24.589394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.830 [2024-07-15 12:59:24.589437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.830 [2024-07-15 12:59:24.589459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.830 [2024-07-15 12:59:24.590038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.830 [2024-07-15 12:59:24.590375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.830 [2024-07-15 12:59:24.590387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.830 [2024-07-15 12:59:24.590397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.830 [2024-07-15 12:59:24.594643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.830 [2024-07-15 12:59:24.603414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.830 [2024-07-15 12:59:24.603948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.830 [2024-07-15 12:59:24.604001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.830 [2024-07-15 12:59:24.604023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.830 [2024-07-15 12:59:24.604572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.830 [2024-07-15 12:59:24.604838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.830 [2024-07-15 12:59:24.604849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.830 [2024-07-15 12:59:24.604859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.830 [2024-07-15 12:59:24.609101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.830 [2024-07-15 12:59:24.618121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.830 [2024-07-15 12:59:24.618627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.830 [2024-07-15 12:59:24.618648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.830 [2024-07-15 12:59:24.618659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.830 [2024-07-15 12:59:24.618923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.830 [2024-07-15 12:59:24.619187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.830 [2024-07-15 12:59:24.619199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.830 [2024-07-15 12:59:24.619208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.830 [2024-07-15 12:59:24.623464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.830 [2024-07-15 12:59:24.632732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.830 [2024-07-15 12:59:24.633188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.830 [2024-07-15 12:59:24.633209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.830 [2024-07-15 12:59:24.633219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.830 [2024-07-15 12:59:24.633492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.830 [2024-07-15 12:59:24.633757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.830 [2024-07-15 12:59:24.633769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.830 [2024-07-15 12:59:24.633778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.830 [2024-07-15 12:59:24.638020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.830 [2024-07-15 12:59:24.647299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.830 [2024-07-15 12:59:24.647824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.830 [2024-07-15 12:59:24.647845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.830 [2024-07-15 12:59:24.647855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.830 [2024-07-15 12:59:24.648119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.830 [2024-07-15 12:59:24.648392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.830 [2024-07-15 12:59:24.648404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.830 [2024-07-15 12:59:24.648414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.830 [2024-07-15 12:59:24.652656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.830 [2024-07-15 12:59:24.661924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.830 [2024-07-15 12:59:24.662424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.830 [2024-07-15 12:59:24.662445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.830 [2024-07-15 12:59:24.662455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.830 [2024-07-15 12:59:24.662719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.830 [2024-07-15 12:59:24.662983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.830 [2024-07-15 12:59:24.662995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.830 [2024-07-15 12:59:24.663004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.830 [2024-07-15 12:59:24.667250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.830 [2024-07-15 12:59:24.676518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.830 [2024-07-15 12:59:24.677046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.831 [2024-07-15 12:59:24.677097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.831 [2024-07-15 12:59:24.677119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.831 [2024-07-15 12:59:24.677712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.831 [2024-07-15 12:59:24.678055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.831 [2024-07-15 12:59:24.678067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.831 [2024-07-15 12:59:24.678076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.831 [2024-07-15 12:59:24.682325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.831 [2024-07-15 12:59:24.691090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.831 [2024-07-15 12:59:24.691601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.831 [2024-07-15 12:59:24.691643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.831 [2024-07-15 12:59:24.691665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.831 [2024-07-15 12:59:24.692243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.831 [2024-07-15 12:59:24.692693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.831 [2024-07-15 12:59:24.692705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.831 [2024-07-15 12:59:24.692714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.831 [2024-07-15 12:59:24.696956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.831 [2024-07-15 12:59:24.705729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.831 [2024-07-15 12:59:24.706267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.831 [2024-07-15 12:59:24.706289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.831 [2024-07-15 12:59:24.706298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.831 [2024-07-15 12:59:24.706564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.831 [2024-07-15 12:59:24.706829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.831 [2024-07-15 12:59:24.706840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.831 [2024-07-15 12:59:24.706850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.831 [2024-07-15 12:59:24.711092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.831 [2024-07-15 12:59:24.720371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.831 [2024-07-15 12:59:24.720898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.831 [2024-07-15 12:59:24.720919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.831 [2024-07-15 12:59:24.720929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.831 [2024-07-15 12:59:24.721193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.831 [2024-07-15 12:59:24.721466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.831 [2024-07-15 12:59:24.721479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.831 [2024-07-15 12:59:24.721488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.831 [2024-07-15 12:59:24.725742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.831 [2024-07-15 12:59:24.735013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.831 [2024-07-15 12:59:24.735547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.831 [2024-07-15 12:59:24.735569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.831 [2024-07-15 12:59:24.735579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.831 [2024-07-15 12:59:24.735843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.831 [2024-07-15 12:59:24.736107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.831 [2024-07-15 12:59:24.736118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.831 [2024-07-15 12:59:24.736128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.831 [2024-07-15 12:59:24.740577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.831 [2024-07-15 12:59:24.749730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.831 [2024-07-15 12:59:24.750205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.831 [2024-07-15 12:59:24.750249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.831 [2024-07-15 12:59:24.750284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.831 [2024-07-15 12:59:24.750791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.831 [2024-07-15 12:59:24.751055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.831 [2024-07-15 12:59:24.751067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.831 [2024-07-15 12:59:24.751076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.831 [2024-07-15 12:59:24.755329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.831 [2024-07-15 12:59:24.764360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.831 [2024-07-15 12:59:24.764835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.831 [2024-07-15 12:59:24.764878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:32.831 [2024-07-15 12:59:24.764900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:32.831 [2024-07-15 12:59:24.765495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:32.831 [2024-07-15 12:59:24.765795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:32.831 [2024-07-15 12:59:24.765807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:32.831 [2024-07-15 12:59:24.765816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.091 [2024-07-15 12:59:24.770066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.091 [2024-07-15 12:59:24.779095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.091 [2024-07-15 12:59:24.779629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.091 [2024-07-15 12:59:24.779654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.091 [2024-07-15 12:59:24.779664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.091 [2024-07-15 12:59:24.779929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.091 [2024-07-15 12:59:24.780194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.091 [2024-07-15 12:59:24.780205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.091 [2024-07-15 12:59:24.780215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.091 [2024-07-15 12:59:24.784470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.091 [2024-07-15 12:59:24.793739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.091 [2024-07-15 12:59:24.794196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.091 [2024-07-15 12:59:24.794216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.091 [2024-07-15 12:59:24.794226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.091 [2024-07-15 12:59:24.794499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.091 [2024-07-15 12:59:24.794764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.091 [2024-07-15 12:59:24.794775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.091 [2024-07-15 12:59:24.794784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.091 [2024-07-15 12:59:24.799028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.091 [2024-07-15 12:59:24.808290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.091 [2024-07-15 12:59:24.808828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.091 [2024-07-15 12:59:24.808881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.091 [2024-07-15 12:59:24.808903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.091 [2024-07-15 12:59:24.809495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.091 [2024-07-15 12:59:24.809846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.091 [2024-07-15 12:59:24.809857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.091 [2024-07-15 12:59:24.809867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.091 [2024-07-15 12:59:24.814111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.091 [2024-07-15 12:59:24.822887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.091 [2024-07-15 12:59:24.823392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.091 [2024-07-15 12:59:24.823414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.091 [2024-07-15 12:59:24.823424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.091 [2024-07-15 12:59:24.823689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.091 [2024-07-15 12:59:24.823962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.091 [2024-07-15 12:59:24.823974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.091 [2024-07-15 12:59:24.823983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.091 [2024-07-15 12:59:24.828230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.091 [2024-07-15 12:59:24.837503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.091 [2024-07-15 12:59:24.838030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.091 [2024-07-15 12:59:24.838079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.091 [2024-07-15 12:59:24.838100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.091 [2024-07-15 12:59:24.838691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.091 [2024-07-15 12:59:24.838956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.091 [2024-07-15 12:59:24.838968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.091 [2024-07-15 12:59:24.838977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.091 [2024-07-15 12:59:24.843237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.091 [2024-07-15 12:59:24.852263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.091 [2024-07-15 12:59:24.852733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.091 [2024-07-15 12:59:24.852754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.091 [2024-07-15 12:59:24.852764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.091 [2024-07-15 12:59:24.853028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.091 [2024-07-15 12:59:24.853301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.091 [2024-07-15 12:59:24.853314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.091 [2024-07-15 12:59:24.853323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.091 [2024-07-15 12:59:24.857574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.091 [2024-07-15 12:59:24.866841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.091 [2024-07-15 12:59:24.867369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.091 [2024-07-15 12:59:24.867390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.091 [2024-07-15 12:59:24.867400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.091 [2024-07-15 12:59:24.867665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.091 [2024-07-15 12:59:24.867929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.091 [2024-07-15 12:59:24.867941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.091 [2024-07-15 12:59:24.867950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.091 [2024-07-15 12:59:24.872201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.091 [2024-07-15 12:59:24.881475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.091 [2024-07-15 12:59:24.881943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.091 [2024-07-15 12:59:24.881965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.091 [2024-07-15 12:59:24.881975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.091 [2024-07-15 12:59:24.882238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.091 [2024-07-15 12:59:24.882511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.091 [2024-07-15 12:59:24.882523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.091 [2024-07-15 12:59:24.882532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.091 [2024-07-15 12:59:24.886778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.091 [2024-07-15 12:59:24.896044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.091 [2024-07-15 12:59:24.896575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.091 [2024-07-15 12:59:24.896597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.091 [2024-07-15 12:59:24.896607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.091 [2024-07-15 12:59:24.896872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.091 [2024-07-15 12:59:24.897136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.091 [2024-07-15 12:59:24.897147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.091 [2024-07-15 12:59:24.897156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.091 [2024-07-15 12:59:24.901405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.091 [2024-07-15 12:59:24.910675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.091 [2024-07-15 12:59:24.911213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.091 [2024-07-15 12:59:24.911268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.091 [2024-07-15 12:59:24.911291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.091 [2024-07-15 12:59:24.911870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.091 [2024-07-15 12:59:24.912210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.092 [2024-07-15 12:59:24.912221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.092 [2024-07-15 12:59:24.912231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.092 [2024-07-15 12:59:24.916483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.092 [2024-07-15 12:59:24.925248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.092 [2024-07-15 12:59:24.925780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.092 [2024-07-15 12:59:24.925801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.092 [2024-07-15 12:59:24.925815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.092 [2024-07-15 12:59:24.926079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.092 [2024-07-15 12:59:24.926350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.092 [2024-07-15 12:59:24.926363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.092 [2024-07-15 12:59:24.926372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.092 [2024-07-15 12:59:24.930615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.092 [2024-07-15 12:59:24.939875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.092 [2024-07-15 12:59:24.940345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.092 [2024-07-15 12:59:24.940366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.092 [2024-07-15 12:59:24.940376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.092 [2024-07-15 12:59:24.940639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.092 [2024-07-15 12:59:24.940904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.092 [2024-07-15 12:59:24.940916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.092 [2024-07-15 12:59:24.940925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.092 [2024-07-15 12:59:24.945187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.092 [2024-07-15 12:59:24.954467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.092 [2024-07-15 12:59:24.955024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.092 [2024-07-15 12:59:24.955045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.092 [2024-07-15 12:59:24.955056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.092 [2024-07-15 12:59:24.955328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.092 [2024-07-15 12:59:24.955595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.092 [2024-07-15 12:59:24.955607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.092 [2024-07-15 12:59:24.955618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.092 [2024-07-15 12:59:24.959870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.092 [2024-07-15 12:59:24.969162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.092 [2024-07-15 12:59:24.969643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.092 [2024-07-15 12:59:24.969663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.092 [2024-07-15 12:59:24.969674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.092 [2024-07-15 12:59:24.969937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.092 [2024-07-15 12:59:24.970202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.092 [2024-07-15 12:59:24.970217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.092 [2024-07-15 12:59:24.970226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.092 [2024-07-15 12:59:24.974481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.092 [2024-07-15 12:59:24.983763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.092 [2024-07-15 12:59:24.984302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.092 [2024-07-15 12:59:24.984345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.092 [2024-07-15 12:59:24.984367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.092 [2024-07-15 12:59:24.984944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.092 [2024-07-15 12:59:24.985412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.092 [2024-07-15 12:59:24.985425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.092 [2024-07-15 12:59:24.985434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.092 [2024-07-15 12:59:24.989684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.092 [2024-07-15 12:59:24.998475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.092 [2024-07-15 12:59:24.999057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.092 [2024-07-15 12:59:24.999078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.092 [2024-07-15 12:59:24.999088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.092 [2024-07-15 12:59:24.999359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.092 [2024-07-15 12:59:24.999624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.092 [2024-07-15 12:59:24.999636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.092 [2024-07-15 12:59:24.999645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.092 [2024-07-15 12:59:25.003901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.092 [2024-07-15 12:59:25.013196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.092 [2024-07-15 12:59:25.013763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.092 [2024-07-15 12:59:25.013806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.092 [2024-07-15 12:59:25.013827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.092 [2024-07-15 12:59:25.014386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.092 [2024-07-15 12:59:25.014651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.092 [2024-07-15 12:59:25.014663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.092 [2024-07-15 12:59:25.014672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.092 [2024-07-15 12:59:25.018923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.092 [2024-07-15 12:59:25.027963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.092 [2024-07-15 12:59:25.028504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.092 [2024-07-15 12:59:25.028525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.092 [2024-07-15 12:59:25.028536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.092 [2024-07-15 12:59:25.028799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.092 [2024-07-15 12:59:25.029064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.092 [2024-07-15 12:59:25.029076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.092 [2024-07-15 12:59:25.029085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.351 [2024-07-15 12:59:25.033351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.351 [2024-07-15 12:59:25.042651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.351 [2024-07-15 12:59:25.043207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.351 [2024-07-15 12:59:25.043228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.351 [2024-07-15 12:59:25.043238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.351 [2024-07-15 12:59:25.043510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.351 [2024-07-15 12:59:25.043775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.351 [2024-07-15 12:59:25.043787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.351 [2024-07-15 12:59:25.043796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.351 [2024-07-15 12:59:25.048056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.351 [2024-07-15 12:59:25.057358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.351 [2024-07-15 12:59:25.057958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.351 [2024-07-15 12:59:25.058000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.351 [2024-07-15 12:59:25.058022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.351 [2024-07-15 12:59:25.058537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.351 [2024-07-15 12:59:25.058804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.351 [2024-07-15 12:59:25.058815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.351 [2024-07-15 12:59:25.058825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.351 [2024-07-15 12:59:25.063078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.351 [2024-07-15 12:59:25.072101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.351 [2024-07-15 12:59:25.072631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.351 [2024-07-15 12:59:25.072653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.351 [2024-07-15 12:59:25.072663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.351 [2024-07-15 12:59:25.072930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.351 [2024-07-15 12:59:25.073195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.351 [2024-07-15 12:59:25.073206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.351 [2024-07-15 12:59:25.073216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.352 [2024-07-15 12:59:25.077476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.352 [2024-07-15 12:59:25.086753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.352 [2024-07-15 12:59:25.087299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.352 [2024-07-15 12:59:25.087321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.352 [2024-07-15 12:59:25.087331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.352 [2024-07-15 12:59:25.087596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.352 [2024-07-15 12:59:25.087860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.352 [2024-07-15 12:59:25.087872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.352 [2024-07-15 12:59:25.087881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.352 [2024-07-15 12:59:25.092131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.352 [2024-07-15 12:59:25.101404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.352 [2024-07-15 12:59:25.101863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.352 [2024-07-15 12:59:25.101884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.352 [2024-07-15 12:59:25.101894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.352 [2024-07-15 12:59:25.102158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.352 [2024-07-15 12:59:25.102430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.352 [2024-07-15 12:59:25.102442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.352 [2024-07-15 12:59:25.102452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.352 [2024-07-15 12:59:25.106696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.352 [2024-07-15 12:59:25.115973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.352 [2024-07-15 12:59:25.116545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.352 [2024-07-15 12:59:25.116587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.352 [2024-07-15 12:59:25.116609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.352 [2024-07-15 12:59:25.117187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.352 [2024-07-15 12:59:25.117678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.352 [2024-07-15 12:59:25.117690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.352 [2024-07-15 12:59:25.117703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.352 [2024-07-15 12:59:25.121949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.352 [2024-07-15 12:59:25.130720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.352 [2024-07-15 12:59:25.131289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.352 [2024-07-15 12:59:25.131333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.352 [2024-07-15 12:59:25.131355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.352 [2024-07-15 12:59:25.131788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.352 [2024-07-15 12:59:25.132052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.352 [2024-07-15 12:59:25.132063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.352 [2024-07-15 12:59:25.132072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.352 [2024-07-15 12:59:25.136323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.352 [2024-07-15 12:59:25.145354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.352 [2024-07-15 12:59:25.145877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.352 [2024-07-15 12:59:25.145898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.352 [2024-07-15 12:59:25.145908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.352 [2024-07-15 12:59:25.146172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.352 [2024-07-15 12:59:25.146444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.352 [2024-07-15 12:59:25.146457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.352 [2024-07-15 12:59:25.146466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.352 [2024-07-15 12:59:25.150713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.352 [2024-07-15 12:59:25.159971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.352 [2024-07-15 12:59:25.160533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.352 [2024-07-15 12:59:25.160576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.352 [2024-07-15 12:59:25.160598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.352 [2024-07-15 12:59:25.161177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.352 [2024-07-15 12:59:25.161687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.352 [2024-07-15 12:59:25.161699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.352 [2024-07-15 12:59:25.161709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.352 [2024-07-15 12:59:25.165966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.352 [2024-07-15 12:59:25.174740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.352 [2024-07-15 12:59:25.175270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.352 [2024-07-15 12:59:25.175295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.352 [2024-07-15 12:59:25.175305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.352 [2024-07-15 12:59:25.175568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.352 [2024-07-15 12:59:25.175833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.352 [2024-07-15 12:59:25.175845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.352 [2024-07-15 12:59:25.175854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.352 [2024-07-15 12:59:25.180100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.352 [2024-07-15 12:59:25.189371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.352 [2024-07-15 12:59:25.189936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.352 [2024-07-15 12:59:25.189978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.352 [2024-07-15 12:59:25.190000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.352 [2024-07-15 12:59:25.190595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.352 [2024-07-15 12:59:25.190920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.352 [2024-07-15 12:59:25.190931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.352 [2024-07-15 12:59:25.190940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.352 [2024-07-15 12:59:25.195186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.352 [2024-07-15 12:59:25.203954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.352 [2024-07-15 12:59:25.204509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.352 [2024-07-15 12:59:25.204530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.352 [2024-07-15 12:59:25.204540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.352 [2024-07-15 12:59:25.204804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.352 [2024-07-15 12:59:25.205068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.352 [2024-07-15 12:59:25.205079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.352 [2024-07-15 12:59:25.205089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.352 [2024-07-15 12:59:25.209338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.352 [2024-07-15 12:59:25.218599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.352 [2024-07-15 12:59:25.219177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.352 [2024-07-15 12:59:25.219218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.352 [2024-07-15 12:59:25.219240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.352 [2024-07-15 12:59:25.219833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.352 [2024-07-15 12:59:25.220121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.352 [2024-07-15 12:59:25.220133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.352 [2024-07-15 12:59:25.220142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.352 [2024-07-15 12:59:25.224396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.352 [2024-07-15 12:59:25.233158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.352 [2024-07-15 12:59:25.233639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.352 [2024-07-15 12:59:25.233660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.352 [2024-07-15 12:59:25.233670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.352 [2024-07-15 12:59:25.233934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.352 [2024-07-15 12:59:25.234198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.353 [2024-07-15 12:59:25.234210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.353 [2024-07-15 12:59:25.234220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.353 [2024-07-15 12:59:25.238472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.353 [2024-07-15 12:59:25.247753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.353 [2024-07-15 12:59:25.248322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.353 [2024-07-15 12:59:25.248363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.353 [2024-07-15 12:59:25.248385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.353 [2024-07-15 12:59:25.248946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.353 [2024-07-15 12:59:25.249211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.353 [2024-07-15 12:59:25.249222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.353 [2024-07-15 12:59:25.249231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.353 [2024-07-15 12:59:25.253486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.353 [2024-07-15 12:59:25.262519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.353 [2024-07-15 12:59:25.263080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.353 [2024-07-15 12:59:25.263122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.353 [2024-07-15 12:59:25.263144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.353 [2024-07-15 12:59:25.263736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.353 [2024-07-15 12:59:25.264002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.353 [2024-07-15 12:59:25.264013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.353 [2024-07-15 12:59:25.264022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.353 [2024-07-15 12:59:25.268274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.353 [2024-07-15 12:59:25.277295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.353 [2024-07-15 12:59:25.277850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.353 [2024-07-15 12:59:25.277870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.353 [2024-07-15 12:59:25.277880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.353 [2024-07-15 12:59:25.278145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.353 [2024-07-15 12:59:25.278417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.353 [2024-07-15 12:59:25.278429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.353 [2024-07-15 12:59:25.278438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.353 [2024-07-15 12:59:25.282681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.612 [2024-07-15 12:59:25.291955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.612 [2024-07-15 12:59:25.292499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 12:59:25.292554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.612 [2024-07-15 12:59:25.292576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.612 [2024-07-15 12:59:25.293161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.612 [2024-07-15 12:59:25.293432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.612 [2024-07-15 12:59:25.293445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.612 [2024-07-15 12:59:25.293454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.612 [2024-07-15 12:59:25.297702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.612 [2024-07-15 12:59:25.306723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.612 [2024-07-15 12:59:25.307287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 12:59:25.307329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.612 [2024-07-15 12:59:25.307351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.612 [2024-07-15 12:59:25.307896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.612 [2024-07-15 12:59:25.308160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.612 [2024-07-15 12:59:25.308172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.612 [2024-07-15 12:59:25.308181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.612 [2024-07-15 12:59:25.312434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.612 [2024-07-15 12:59:25.321456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.612 [2024-07-15 12:59:25.322026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 12:59:25.322068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.612 [2024-07-15 12:59:25.322096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.612 [2024-07-15 12:59:25.322689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.612 [2024-07-15 12:59:25.323205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.612 [2024-07-15 12:59:25.323216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.612 [2024-07-15 12:59:25.323225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.612 [2024-07-15 12:59:25.327480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.612 [2024-07-15 12:59:25.336245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.612 [2024-07-15 12:59:25.336813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 12:59:25.336854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.612 [2024-07-15 12:59:25.336876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.612 [2024-07-15 12:59:25.337342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.612 [2024-07-15 12:59:25.337607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.612 [2024-07-15 12:59:25.337618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.612 [2024-07-15 12:59:25.337628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.612 [2024-07-15 12:59:25.341870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.612 [2024-07-15 12:59:25.350897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.612 [2024-07-15 12:59:25.351466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 12:59:25.351508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.612 [2024-07-15 12:59:25.351530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.612 [2024-07-15 12:59:25.352044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.612 [2024-07-15 12:59:25.352404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.612 [2024-07-15 12:59:25.352422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.612 [2024-07-15 12:59:25.352435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.612 [2024-07-15 12:59:25.358672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.612 [2024-07-15 12:59:25.366129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.612 [2024-07-15 12:59:25.366696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 12:59:25.366738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.612 [2024-07-15 12:59:25.366760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.612 [2024-07-15 12:59:25.367280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.612 [2024-07-15 12:59:25.367546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.612 [2024-07-15 12:59:25.367561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.612 [2024-07-15 12:59:25.367570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.612 [2024-07-15 12:59:25.371816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.612 [2024-07-15 12:59:25.380829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.612 [2024-07-15 12:59:25.381395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 12:59:25.381437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.612 [2024-07-15 12:59:25.381458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.612 [2024-07-15 12:59:25.382036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.612 [2024-07-15 12:59:25.382357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.612 [2024-07-15 12:59:25.382369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.612 [2024-07-15 12:59:25.382378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.612 [2024-07-15 12:59:25.386626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.612 [2024-07-15 12:59:25.395399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.612 [2024-07-15 12:59:25.395958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 12:59:25.395999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.612 [2024-07-15 12:59:25.396021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.612 [2024-07-15 12:59:25.396484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.612 [2024-07-15 12:59:25.396749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.612 [2024-07-15 12:59:25.396761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.612 [2024-07-15 12:59:25.396770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.612 [2024-07-15 12:59:25.401010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.612 [2024-07-15 12:59:25.410030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.612 [2024-07-15 12:59:25.410569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 12:59:25.410611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.612 [2024-07-15 12:59:25.410633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.612 [2024-07-15 12:59:25.411212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.612 [2024-07-15 12:59:25.411498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.612 [2024-07-15 12:59:25.411510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.612 [2024-07-15 12:59:25.411519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.612 [2024-07-15 12:59:25.415769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.612 [2024-07-15 12:59:25.424791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.613 [2024-07-15 12:59:25.425359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 12:59:25.425400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.613 [2024-07-15 12:59:25.425422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.613 [2024-07-15 12:59:25.426000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.613 [2024-07-15 12:59:25.426381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.613 [2024-07-15 12:59:25.426394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.613 [2024-07-15 12:59:25.426403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.613 [2024-07-15 12:59:25.430649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.613 [2024-07-15 12:59:25.439412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.613 [2024-07-15 12:59:25.439964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 12:59:25.439984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.613 [2024-07-15 12:59:25.439994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.613 [2024-07-15 12:59:25.440266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.613 [2024-07-15 12:59:25.440531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.613 [2024-07-15 12:59:25.440542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.613 [2024-07-15 12:59:25.440551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.613 [2024-07-15 12:59:25.444804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.613 [2024-07-15 12:59:25.454087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.613 [2024-07-15 12:59:25.454624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 12:59:25.454666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.613 [2024-07-15 12:59:25.454687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.613 [2024-07-15 12:59:25.455239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.613 [2024-07-15 12:59:25.455512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.613 [2024-07-15 12:59:25.455524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.613 [2024-07-15 12:59:25.455533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.613 [2024-07-15 12:59:25.459786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.613 [2024-07-15 12:59:25.468812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.613 [2024-07-15 12:59:25.469341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 12:59:25.469383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.613 [2024-07-15 12:59:25.469412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.613 [2024-07-15 12:59:25.469992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.613 [2024-07-15 12:59:25.470442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.613 [2024-07-15 12:59:25.470455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.613 [2024-07-15 12:59:25.470465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.613 [2024-07-15 12:59:25.474712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.613 [2024-07-15 12:59:25.483489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.613 [2024-07-15 12:59:25.484041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 12:59:25.484062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.613 [2024-07-15 12:59:25.484072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.613 [2024-07-15 12:59:25.484343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.613 [2024-07-15 12:59:25.484609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.613 [2024-07-15 12:59:25.484621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.613 [2024-07-15 12:59:25.484630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.613 [2024-07-15 12:59:25.488878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.613 [2024-07-15 12:59:25.498147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.613 [2024-07-15 12:59:25.498702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 12:59:25.498754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.613 [2024-07-15 12:59:25.498776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.613 [2024-07-15 12:59:25.499321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.613 [2024-07-15 12:59:25.499586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.613 [2024-07-15 12:59:25.499598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.613 [2024-07-15 12:59:25.499607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.613 [2024-07-15 12:59:25.503850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.613 [2024-07-15 12:59:25.512880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.613 [2024-07-15 12:59:25.513433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 12:59:25.513454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.613 [2024-07-15 12:59:25.513464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.613 [2024-07-15 12:59:25.513728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.613 [2024-07-15 12:59:25.513993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.613 [2024-07-15 12:59:25.514008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.613 [2024-07-15 12:59:25.514018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.613 [2024-07-15 12:59:25.518269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.613 [2024-07-15 12:59:25.527544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.613 [2024-07-15 12:59:25.528107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 12:59:25.528149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.613 [2024-07-15 12:59:25.528171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.613 [2024-07-15 12:59:25.528671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.613 [2024-07-15 12:59:25.528942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.613 [2024-07-15 12:59:25.528953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.613 [2024-07-15 12:59:25.528963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.613 [2024-07-15 12:59:25.533213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.613 [2024-07-15 12:59:25.542276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.613 [2024-07-15 12:59:25.542766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 12:59:25.542809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.613 [2024-07-15 12:59:25.542830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.613 [2024-07-15 12:59:25.543423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.613 [2024-07-15 12:59:25.543709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.613 [2024-07-15 12:59:25.543720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.613 [2024-07-15 12:59:25.543730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.613 [2024-07-15 12:59:25.548198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.873 [2024-07-15 12:59:25.556990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.873 [2024-07-15 12:59:25.557545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.873 [2024-07-15 12:59:25.557589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.873 [2024-07-15 12:59:25.557612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.873 [2024-07-15 12:59:25.558193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.873 [2024-07-15 12:59:25.558770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.873 [2024-07-15 12:59:25.558782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.873 [2024-07-15 12:59:25.558792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.873 [2024-07-15 12:59:25.563040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.873 [2024-07-15 12:59:25.571581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.873 [2024-07-15 12:59:25.572116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 12:59:25.572156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.874 [2024-07-15 12:59:25.572179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.874 [2024-07-15 12:59:25.572768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.874 [2024-07-15 12:59:25.573034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.874 [2024-07-15 12:59:25.573046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.874 [2024-07-15 12:59:25.573055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.874 [2024-07-15 12:59:25.577312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.874 [2024-07-15 12:59:25.586341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.874 [2024-07-15 12:59:25.586740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 12:59:25.586761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.874 [2024-07-15 12:59:25.586772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.874 [2024-07-15 12:59:25.587036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.874 [2024-07-15 12:59:25.587309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.874 [2024-07-15 12:59:25.587321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.874 [2024-07-15 12:59:25.587330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.874 [2024-07-15 12:59:25.591579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.874 [2024-07-15 12:59:25.601132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.874 [2024-07-15 12:59:25.601611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 12:59:25.601632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.874 [2024-07-15 12:59:25.601642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.874 [2024-07-15 12:59:25.601906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.874 [2024-07-15 12:59:25.602170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.874 [2024-07-15 12:59:25.602181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.874 [2024-07-15 12:59:25.602190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.874 [2024-07-15 12:59:25.606447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.874 [2024-07-15 12:59:25.615735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.874 [2024-07-15 12:59:25.616186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 12:59:25.616207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.874 [2024-07-15 12:59:25.616217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.874 [2024-07-15 12:59:25.616491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.874 [2024-07-15 12:59:25.616757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.874 [2024-07-15 12:59:25.616768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.874 [2024-07-15 12:59:25.616777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.874 [2024-07-15 12:59:25.621028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.874 [2024-07-15 12:59:25.630311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.874 [2024-07-15 12:59:25.630789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 12:59:25.630810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.874 [2024-07-15 12:59:25.630820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.874 [2024-07-15 12:59:25.631084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.874 [2024-07-15 12:59:25.631356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.874 [2024-07-15 12:59:25.631369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.874 [2024-07-15 12:59:25.631378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.874 [2024-07-15 12:59:25.635627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.874 [2024-07-15 12:59:25.644925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.874 [2024-07-15 12:59:25.645430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.874 [2024-07-15 12:59:25.645474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:33.874 [2024-07-15 12:59:25.645495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:33.874 [2024-07-15 12:59:25.645817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:33.874 [2024-07-15 12:59:25.646082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.874 [2024-07-15 12:59:25.646093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.874 [2024-07-15 12:59:25.646103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.874 [2024-07-15 12:59:25.652278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.874 [2024-07-15 12:59:25.660123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.874 [2024-07-15 12:59:25.660577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.874 [2024-07-15 12:59:25.660598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.874 [2024-07-15 12:59:25.660608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.874 [2024-07-15 12:59:25.660873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.874 [2024-07-15 12:59:25.661138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.874 [2024-07-15 12:59:25.661150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.874 [2024-07-15 12:59:25.661164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.874 [2024-07-15 12:59:25.665418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.874 [2024-07-15 12:59:25.674708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.874 [2024-07-15 12:59:25.675221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.874 [2024-07-15 12:59:25.675273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.874 [2024-07-15 12:59:25.675297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.874 [2024-07-15 12:59:25.675790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.874 [2024-07-15 12:59:25.676055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.874 [2024-07-15 12:59:25.676067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.874 [2024-07-15 12:59:25.676076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.874 [2024-07-15 12:59:25.680329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.874 [2024-07-15 12:59:25.689365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.874 [2024-07-15 12:59:25.689904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.874 [2024-07-15 12:59:25.689946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.874 [2024-07-15 12:59:25.689967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.874 [2024-07-15 12:59:25.690559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.874 [2024-07-15 12:59:25.691061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.874 [2024-07-15 12:59:25.691073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.874 [2024-07-15 12:59:25.691083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.874 [2024-07-15 12:59:25.695336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.874 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 4109812 Killed "${NVMF_APP[@]}" "$@"
00:29:33.874 12:59:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:33.874 12:59:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:33.874 12:59:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:33.874 12:59:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:33.875 12:59:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:33.875 [2024-07-15 12:59:25.704153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.875 [2024-07-15 12:59:25.704680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.875 [2024-07-15 12:59:25.704701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.875 [2024-07-15 12:59:25.704712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.875 [2024-07-15 12:59:25.704976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.875 12:59:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=4111290
00:29:33.875 [2024-07-15 12:59:25.705242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.875 [2024-07-15 12:59:25.705266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.875 [2024-07-15 12:59:25.705275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.875 12:59:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 4111290
00:29:33.875 12:59:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:33.875 12:59:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 4111290 ']'
00:29:33.875 12:59:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:33.875 12:59:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:33.875 12:59:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:33.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:33.875 12:59:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:33.875 12:59:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:33.875 [2024-07-15 12:59:25.709530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.875 [2024-07-15 12:59:25.718823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.875 [2024-07-15 12:59:25.719353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.875 [2024-07-15 12:59:25.719375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.875 [2024-07-15 12:59:25.719385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.875 [2024-07-15 12:59:25.719648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.875 [2024-07-15 12:59:25.719914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.875 [2024-07-15 12:59:25.719925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.875 [2024-07-15 12:59:25.719935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.875 [2024-07-15 12:59:25.724188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.875 [2024-07-15 12:59:25.733487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.875 [2024-07-15 12:59:25.733949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.875 [2024-07-15 12:59:25.733970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.875 [2024-07-15 12:59:25.733980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.875 [2024-07-15 12:59:25.734243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.875 [2024-07-15 12:59:25.734516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.875 [2024-07-15 12:59:25.734528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.875 [2024-07-15 12:59:25.734537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.875 [2024-07-15 12:59:25.738786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.875 [2024-07-15 12:59:25.748081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.875 [2024-07-15 12:59:25.748633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.875 [2024-07-15 12:59:25.748655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.875 [2024-07-15 12:59:25.748669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.875 [2024-07-15 12:59:25.748932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.875 [2024-07-15 12:59:25.749196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.875 [2024-07-15 12:59:25.749208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.875 [2024-07-15 12:59:25.749217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.875 [2024-07-15 12:59:25.753470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.875 [2024-07-15 12:59:25.756892] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization...
00:29:33.875 [2024-07-15 12:59:25.756953] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:33.875 [2024-07-15 12:59:25.762948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.875 [2024-07-15 12:59:25.763504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.875 [2024-07-15 12:59:25.763525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.875 [2024-07-15 12:59:25.763535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.875 [2024-07-15 12:59:25.763799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.875 [2024-07-15 12:59:25.764063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.875 [2024-07-15 12:59:25.764074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.875 [2024-07-15 12:59:25.764084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.875 [2024-07-15 12:59:25.768447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.875 [2024-07-15 12:59:25.777741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.875 [2024-07-15 12:59:25.778276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.875 [2024-07-15 12:59:25.778298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.875 [2024-07-15 12:59:25.778309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.875 [2024-07-15 12:59:25.778573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.875 [2024-07-15 12:59:25.778839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.875 [2024-07-15 12:59:25.778850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.875 [2024-07-15 12:59:25.778860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.875 [2024-07-15 12:59:25.783114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.875 [2024-07-15 12:59:25.792403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.875 [2024-07-15 12:59:25.792831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.875 [2024-07-15 12:59:25.792853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.875 [2024-07-15 12:59:25.792868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.875 [2024-07-15 12:59:25.793133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.875 [2024-07-15 12:59:25.793405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.875 [2024-07-15 12:59:25.793418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.875 [2024-07-15 12:59:25.793427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.875 EAL: No free 2048 kB hugepages reported on node 1
00:29:33.875 [2024-07-15 12:59:25.797670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.875 [2024-07-15 12:59:25.806960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.875 [2024-07-15 12:59:25.807515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.875 [2024-07-15 12:59:25.807537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:33.875 [2024-07-15 12:59:25.807547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:33.875 [2024-07-15 12:59:25.807812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:33.875 [2024-07-15 12:59:25.808076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.875 [2024-07-15 12:59:25.808088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.875 [2024-07-15 12:59:25.808098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.875 [2024-07-15 12:59:25.812354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.135 [2024-07-15 12:59:25.821642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.135 [2024-07-15 12:59:25.822096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.135 [2024-07-15 12:59:25.822117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.135 [2024-07-15 12:59:25.822127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.135 [2024-07-15 12:59:25.822400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.135 [2024-07-15 12:59:25.822665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.135 [2024-07-15 12:59:25.822676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.135 [2024-07-15 12:59:25.822686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.135 [2024-07-15 12:59:25.826935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.135 [2024-07-15 12:59:25.836223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.135 [2024-07-15 12:59:25.836783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.135 [2024-07-15 12:59:25.836805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.135 [2024-07-15 12:59:25.836815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.135 [2024-07-15 12:59:25.837079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.135 [2024-07-15 12:59:25.837349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.135 [2024-07-15 12:59:25.837366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.135 [2024-07-15 12:59:25.837376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.135 [2024-07-15 12:59:25.841630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.135 [2024-07-15 12:59:25.846012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:34.135 [2024-07-15 12:59:25.850938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.135 [2024-07-15 12:59:25.851495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.135 [2024-07-15 12:59:25.851517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.135 [2024-07-15 12:59:25.851528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.135 [2024-07-15 12:59:25.851793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.135 [2024-07-15 12:59:25.852060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.135 [2024-07-15 12:59:25.852072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.135 [2024-07-15 12:59:25.852081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.135 [2024-07-15 12:59:25.856346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.135 [2024-07-15 12:59:25.865641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.135 [2024-07-15 12:59:25.866115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.135 [2024-07-15 12:59:25.866137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.135 [2024-07-15 12:59:25.866147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.135 [2024-07-15 12:59:25.866417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.135 [2024-07-15 12:59:25.866684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.135 [2024-07-15 12:59:25.866696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.135 [2024-07-15 12:59:25.866706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.135 [2024-07-15 12:59:25.870958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.135 [2024-07-15 12:59:25.880244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.135 [2024-07-15 12:59:25.880803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.135 [2024-07-15 12:59:25.880824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.135 [2024-07-15 12:59:25.880834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.135 [2024-07-15 12:59:25.881099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.135 [2024-07-15 12:59:25.881373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.135 [2024-07-15 12:59:25.881387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.135 [2024-07-15 12:59:25.881396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.135 [2024-07-15 12:59:25.885646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.135 [2024-07-15 12:59:25.894926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.135 [2024-07-15 12:59:25.895503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.135 [2024-07-15 12:59:25.895524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.135 [2024-07-15 12:59:25.895534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.135 [2024-07-15 12:59:25.895799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.135 [2024-07-15 12:59:25.896064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.135 [2024-07-15 12:59:25.896076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.135 [2024-07-15 12:59:25.896085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.135 [2024-07-15 12:59:25.900348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.135 [2024-07-15 12:59:25.909637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.135 [2024-07-15 12:59:25.910205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.135 [2024-07-15 12:59:25.910228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.135 [2024-07-15 12:59:25.910239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.135 [2024-07-15 12:59:25.910511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.135 [2024-07-15 12:59:25.910779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.135 [2024-07-15 12:59:25.910791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.135 [2024-07-15 12:59:25.910802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.135 [2024-07-15 12:59:25.915058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.135 [2024-07-15 12:59:25.924352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.135 [2024-07-15 12:59:25.924866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.135 [2024-07-15 12:59:25.924887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.135 [2024-07-15 12:59:25.924898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.135 [2024-07-15 12:59:25.925162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.135 [2024-07-15 12:59:25.925436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.135 [2024-07-15 12:59:25.925448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.135 [2024-07-15 12:59:25.925457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.135 [2024-07-15 12:59:25.929710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.135 [2024-07-15 12:59:25.938987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.135 [2024-07-15 12:59:25.939552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.135 [2024-07-15 12:59:25.939573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.135 [2024-07-15 12:59:25.939589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.136 [2024-07-15 12:59:25.939853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.136 [2024-07-15 12:59:25.940118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.136 [2024-07-15 12:59:25.940130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.136 [2024-07-15 12:59:25.940139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.136 [2024-07-15 12:59:25.944404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.136 [2024-07-15 12:59:25.952898] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:34.136 [2024-07-15 12:59:25.952937] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:34.136 [2024-07-15 12:59:25.952949] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:34.136 [2024-07-15 12:59:25.952960] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:34.136 [2024-07-15 12:59:25.952970] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:34.136 [2024-07-15 12:59:25.953034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:29:34.136 [2024-07-15 12:59:25.953148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:29:34.136 [2024-07-15 12:59:25.953150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:34.136 [2024-07-15 12:59:25.953721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.136 [2024-07-15 12:59:25.954212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.136 [2024-07-15 12:59:25.954233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.136 [2024-07-15 12:59:25.954244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.136 [2024-07-15 12:59:25.954515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.136 [2024-07-15 12:59:25.954781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.136 [2024-07-15 12:59:25.954793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.136 [2024-07-15 12:59:25.954802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.136 [2024-07-15 12:59:25.959060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.136 [2024-07-15 12:59:25.968362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.136 [2024-07-15 12:59:25.968915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.136 [2024-07-15 12:59:25.968941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.136 [2024-07-15 12:59:25.968954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.136 [2024-07-15 12:59:25.969219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.136 [2024-07-15 12:59:25.969493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.136 [2024-07-15 12:59:25.969505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.136 [2024-07-15 12:59:25.969515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.136 [2024-07-15 12:59:25.973780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.136 [2024-07-15 12:59:25.983089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.136 [2024-07-15 12:59:25.983516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.136 [2024-07-15 12:59:25.983540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.136 [2024-07-15 12:59:25.983551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.136 [2024-07-15 12:59:25.983814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.136 [2024-07-15 12:59:25.984079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.136 [2024-07-15 12:59:25.984091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.136 [2024-07-15 12:59:25.984101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.136 [2024-07-15 12:59:25.988369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.136 [2024-07-15 12:59:25.997664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.136 [2024-07-15 12:59:25.998075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.136 [2024-07-15 12:59:25.998098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.136 [2024-07-15 12:59:25.998108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.136 [2024-07-15 12:59:25.998379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.136 [2024-07-15 12:59:25.998646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.136 [2024-07-15 12:59:25.998657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.136 [2024-07-15 12:59:25.998667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.136 [2024-07-15 12:59:26.002916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.136 [2024-07-15 12:59:26.012462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.136 [2024-07-15 12:59:26.012940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.136 [2024-07-15 12:59:26.012964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.136 [2024-07-15 12:59:26.012975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.136 [2024-07-15 12:59:26.013239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.136 [2024-07-15 12:59:26.013513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.136 [2024-07-15 12:59:26.013524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.136 [2024-07-15 12:59:26.013534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.136 [2024-07-15 12:59:26.017784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.136 [2024-07-15 12:59:26.027070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.136 [2024-07-15 12:59:26.027605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.136 [2024-07-15 12:59:26.027626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.136 [2024-07-15 12:59:26.027641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.136 [2024-07-15 12:59:26.027904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.136 [2024-07-15 12:59:26.028167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.136 [2024-07-15 12:59:26.028177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.136 [2024-07-15 12:59:26.028187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.136 [2024-07-15 12:59:26.032445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.136 [2024-07-15 12:59:26.041736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.136 [2024-07-15 12:59:26.042213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.136 [2024-07-15 12:59:26.042232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.136 [2024-07-15 12:59:26.042242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.136 [2024-07-15 12:59:26.042522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.136 [2024-07-15 12:59:26.042786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.136 [2024-07-15 12:59:26.042797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.136 [2024-07-15 12:59:26.042806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.136 [2024-07-15 12:59:26.047056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.136 [2024-07-15 12:59:26.056345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.136 [2024-07-15 12:59:26.056734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.136 [2024-07-15 12:59:26.056754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.136 [2024-07-15 12:59:26.056763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.136 [2024-07-15 12:59:26.057027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.136 [2024-07-15 12:59:26.057296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.136 [2024-07-15 12:59:26.057306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.136 [2024-07-15 12:59:26.057315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.136 [2024-07-15 12:59:26.061562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.136 [2024-07-15 12:59:26.071098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.136 [2024-07-15 12:59:26.071489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.136 [2024-07-15 12:59:26.071510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.136 [2024-07-15 12:59:26.071520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.136 [2024-07-15 12:59:26.071785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.136 [2024-07-15 12:59:26.072049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.136 [2024-07-15 12:59:26.072063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.136 [2024-07-15 12:59:26.072073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.395 [2024-07-15 12:59:26.076329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.395 [2024-07-15 12:59:26.085853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.395 [2024-07-15 12:59:26.086324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.395 [2024-07-15 12:59:26.086344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.395 [2024-07-15 12:59:26.086354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.395 [2024-07-15 12:59:26.086618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.395 [2024-07-15 12:59:26.086881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.396 [2024-07-15 12:59:26.086891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.396 [2024-07-15 12:59:26.086901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.396 [2024-07-15 12:59:26.091153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.396 [2024-07-15 12:59:26.100434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.396 [2024-07-15 12:59:26.100896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.396 [2024-07-15 12:59:26.100916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.396 [2024-07-15 12:59:26.100927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.396 [2024-07-15 12:59:26.101190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.396 [2024-07-15 12:59:26.101459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.396 [2024-07-15 12:59:26.101470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.396 [2024-07-15 12:59:26.101479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.396 [2024-07-15 12:59:26.105729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.396 [2024-07-15 12:59:26.115006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.396 [2024-07-15 12:59:26.115482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.396 [2024-07-15 12:59:26.115502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.396 [2024-07-15 12:59:26.115513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.396 [2024-07-15 12:59:26.115776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.396 [2024-07-15 12:59:26.116039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.396 [2024-07-15 12:59:26.116049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.396 [2024-07-15 12:59:26.116060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.396 [2024-07-15 12:59:26.120311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.396 [2024-07-15 12:59:26.129588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.396 [2024-07-15 12:59:26.130132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.396 [2024-07-15 12:59:26.130152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.396 [2024-07-15 12:59:26.130163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.396 [2024-07-15 12:59:26.130432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.396 [2024-07-15 12:59:26.130697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.396 [2024-07-15 12:59:26.130707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.396 [2024-07-15 12:59:26.130716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.396 [2024-07-15 12:59:26.134960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.396 [2024-07-15 12:59:26.144239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.396 [2024-07-15 12:59:26.144774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.396 [2024-07-15 12:59:26.144794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.396 [2024-07-15 12:59:26.144804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.396 [2024-07-15 12:59:26.145069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.396 [2024-07-15 12:59:26.145337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.396 [2024-07-15 12:59:26.145348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.396 [2024-07-15 12:59:26.145357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.396 [2024-07-15 12:59:26.149601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.396 [2024-07-15 12:59:26.158876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.396 [2024-07-15 12:59:26.159400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.396 [2024-07-15 12:59:26.159421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.396 [2024-07-15 12:59:26.159432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.396 [2024-07-15 12:59:26.159696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.396 [2024-07-15 12:59:26.159959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.396 [2024-07-15 12:59:26.159970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.396 [2024-07-15 12:59:26.159979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.396 [2024-07-15 12:59:26.164224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.396 [2024-07-15 12:59:26.173500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.396 [2024-07-15 12:59:26.174051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.396 [2024-07-15 12:59:26.174071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.396 [2024-07-15 12:59:26.174082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.396 [2024-07-15 12:59:26.174355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.396 [2024-07-15 12:59:26.174619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.396 [2024-07-15 12:59:26.174629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.396 [2024-07-15 12:59:26.174639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.396 [2024-07-15 12:59:26.178885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.396 [2024-07-15 12:59:26.188161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.396 [2024-07-15 12:59:26.188713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.396 [2024-07-15 12:59:26.188733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.396 [2024-07-15 12:59:26.188744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.396 [2024-07-15 12:59:26.189007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.396 [2024-07-15 12:59:26.189277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.396 [2024-07-15 12:59:26.189289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.396 [2024-07-15 12:59:26.189299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.396 [2024-07-15 12:59:26.193550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.396 [2024-07-15 12:59:26.202819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.396 [2024-07-15 12:59:26.203342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.396 [2024-07-15 12:59:26.203362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.396 [2024-07-15 12:59:26.203372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.396 [2024-07-15 12:59:26.203635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.396 [2024-07-15 12:59:26.203899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.396 [2024-07-15 12:59:26.203909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.396 [2024-07-15 12:59:26.203918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.396 [2024-07-15 12:59:26.208162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.396 [2024-07-15 12:59:26.217438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.396 [2024-07-15 12:59:26.217990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.396 [2024-07-15 12:59:26.218009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.396 [2024-07-15 12:59:26.218020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.396 [2024-07-15 12:59:26.218289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.396 [2024-07-15 12:59:26.218553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.396 [2024-07-15 12:59:26.218563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.396 [2024-07-15 12:59:26.218576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.396 [2024-07-15 12:59:26.222826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.396 [2024-07-15 12:59:26.232106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.396 [2024-07-15 12:59:26.232659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.396 [2024-07-15 12:59:26.232679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.396 [2024-07-15 12:59:26.232689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.396 [2024-07-15 12:59:26.232953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.396 [2024-07-15 12:59:26.233216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.396 [2024-07-15 12:59:26.233226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.396 [2024-07-15 12:59:26.233235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.396 [2024-07-15 12:59:26.237487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.396 [2024-07-15 12:59:26.246770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.396 [2024-07-15 12:59:26.247323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.396 [2024-07-15 12:59:26.247344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.396 [2024-07-15 12:59:26.247354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.397 [2024-07-15 12:59:26.247617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.397 [2024-07-15 12:59:26.247880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.397 [2024-07-15 12:59:26.247890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.397 [2024-07-15 12:59:26.247899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.397 [2024-07-15 12:59:26.252148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.397 [2024-07-15 12:59:26.261424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.397 [2024-07-15 12:59:26.261972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.397 [2024-07-15 12:59:26.261992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.397 [2024-07-15 12:59:26.262002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.397 [2024-07-15 12:59:26.262272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.397 [2024-07-15 12:59:26.262536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.397 [2024-07-15 12:59:26.262547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.397 [2024-07-15 12:59:26.262556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.397 [2024-07-15 12:59:26.266798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.397 [2024-07-15 12:59:26.276069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.397 [2024-07-15 12:59:26.276531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.397 [2024-07-15 12:59:26.276551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.397 [2024-07-15 12:59:26.276561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.397 [2024-07-15 12:59:26.276825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.397 [2024-07-15 12:59:26.277088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.397 [2024-07-15 12:59:26.277098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.397 [2024-07-15 12:59:26.277108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.397 [2024-07-15 12:59:26.281351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.397 [2024-07-15 12:59:26.290626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.397 [2024-07-15 12:59:26.291178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.397 [2024-07-15 12:59:26.291198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.397 [2024-07-15 12:59:26.291208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.397 [2024-07-15 12:59:26.291478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.397 [2024-07-15 12:59:26.291742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.397 [2024-07-15 12:59:26.291753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.397 [2024-07-15 12:59:26.291762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.397 [2024-07-15 12:59:26.296005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.397 [2024-07-15 12:59:26.305288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.397 [2024-07-15 12:59:26.305834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.397 [2024-07-15 12:59:26.305854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.397 [2024-07-15 12:59:26.305864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.397 [2024-07-15 12:59:26.306128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.397 [2024-07-15 12:59:26.306396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.397 [2024-07-15 12:59:26.306410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.397 [2024-07-15 12:59:26.306419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.397 [2024-07-15 12:59:26.310661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.397 [2024-07-15 12:59:26.319930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.397 [2024-07-15 12:59:26.320481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.397 [2024-07-15 12:59:26.320501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.397 [2024-07-15 12:59:26.320512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.397 [2024-07-15 12:59:26.320779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.397 [2024-07-15 12:59:26.321043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.397 [2024-07-15 12:59:26.321053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.397 [2024-07-15 12:59:26.321062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.397 [2024-07-15 12:59:26.325313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.397 [2024-07-15 12:59:26.334593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.657 [2024-07-15 12:59:26.335124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.657 [2024-07-15 12:59:26.335145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.657 [2024-07-15 12:59:26.335155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.657 [2024-07-15 12:59:26.335426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.657 [2024-07-15 12:59:26.335690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.657 [2024-07-15 12:59:26.335700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.657 [2024-07-15 12:59:26.335710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.657 [2024-07-15 12:59:26.339954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.657 [2024-07-15 12:59:26.349239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.657 [2024-07-15 12:59:26.349787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.657 [2024-07-15 12:59:26.349807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.657 [2024-07-15 12:59:26.349817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.657 [2024-07-15 12:59:26.350082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.657 [2024-07-15 12:59:26.350351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.657 [2024-07-15 12:59:26.350362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.657 [2024-07-15 12:59:26.350371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.657 [2024-07-15 12:59:26.354613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.657 [2024-07-15 12:59:26.363879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.657 [2024-07-15 12:59:26.364363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.657 [2024-07-15 12:59:26.364386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.657 [2024-07-15 12:59:26.364396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.657 [2024-07-15 12:59:26.364661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.657 [2024-07-15 12:59:26.364925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.657 [2024-07-15 12:59:26.364935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.657 [2024-07-15 12:59:26.364948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.657 [2024-07-15 12:59:26.369194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.657 [2024-07-15 12:59:26.378472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.657 [2024-07-15 12:59:26.379012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.657 [2024-07-15 12:59:26.379033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.657 [2024-07-15 12:59:26.379042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.657 [2024-07-15 12:59:26.379314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.657 [2024-07-15 12:59:26.379580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.657 [2024-07-15 12:59:26.379590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.657 [2024-07-15 12:59:26.379600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.657 [2024-07-15 12:59:26.383840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.657 [2024-07-15 12:59:26.393111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.657 [2024-07-15 12:59:26.393669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.657 [2024-07-15 12:59:26.393689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.657 [2024-07-15 12:59:26.393700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.657 [2024-07-15 12:59:26.393964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.657 [2024-07-15 12:59:26.394227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.657 [2024-07-15 12:59:26.394237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.657 [2024-07-15 12:59:26.394247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.657 [2024-07-15 12:59:26.398492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.657 [2024-07-15 12:59:26.407776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.657 [2024-07-15 12:59:26.408324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.657 [2024-07-15 12:59:26.408344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.657 [2024-07-15 12:59:26.408354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.657 [2024-07-15 12:59:26.408618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.657 [2024-07-15 12:59:26.408881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.657 [2024-07-15 12:59:26.408892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.657 [2024-07-15 12:59:26.408903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.657 [2024-07-15 12:59:26.413151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.657 [2024-07-15 12:59:26.422428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.657 [2024-07-15 12:59:26.422983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.657 [2024-07-15 12:59:26.423006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.657 [2024-07-15 12:59:26.423017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.657 [2024-07-15 12:59:26.423287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.657 [2024-07-15 12:59:26.423550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.657 [2024-07-15 12:59:26.423560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.657 [2024-07-15 12:59:26.423570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.657 [2024-07-15 12:59:26.427817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.657 [2024-07-15 12:59:26.437091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.658 [2024-07-15 12:59:26.437647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.658 [2024-07-15 12:59:26.437667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.658 [2024-07-15 12:59:26.437677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.658 [2024-07-15 12:59:26.437942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.658 [2024-07-15 12:59:26.438205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.658 [2024-07-15 12:59:26.438215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.658 [2024-07-15 12:59:26.438225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.658 [2024-07-15 12:59:26.442482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.658 [2024-07-15 12:59:26.451753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.658 [2024-07-15 12:59:26.452197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.658 [2024-07-15 12:59:26.452216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.658 [2024-07-15 12:59:26.452226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.658 [2024-07-15 12:59:26.452497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.658 [2024-07-15 12:59:26.452761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.658 [2024-07-15 12:59:26.452771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.658 [2024-07-15 12:59:26.452781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.658 [2024-07-15 12:59:26.457031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.658 [2024-07-15 12:59:26.466307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.658 [2024-07-15 12:59:26.466845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.658 [2024-07-15 12:59:26.466865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.658 [2024-07-15 12:59:26.466877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.658 [2024-07-15 12:59:26.467141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.658 [2024-07-15 12:59:26.478929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.658 [2024-07-15 12:59:26.478946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.658 [2024-07-15 12:59:26.478958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.658 [2024-07-15 12:59:26.478977] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:34.658 [2024-07-15 12:59:26.484834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.658 [2024-07-15 12:59:26.495026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.658 [2024-07-15 12:59:26.495660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.658 [2024-07-15 12:59:26.495686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.658 [2024-07-15 12:59:26.495699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.658 [2024-07-15 12:59:26.496041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.658 [2024-07-15 12:59:26.496390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.658 [2024-07-15 12:59:26.496405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.658 [2024-07-15 12:59:26.496417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.658 [2024-07-15 12:59:26.500787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.658 [2024-07-15 12:59:26.509814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.658 [2024-07-15 12:59:26.510338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.658 [2024-07-15 12:59:26.510359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.658 [2024-07-15 12:59:26.510370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.658 [2024-07-15 12:59:26.510635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.658 [2024-07-15 12:59:26.510900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.658 [2024-07-15 12:59:26.510912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.658 [2024-07-15 12:59:26.510921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.658 [2024-07-15 12:59:26.515169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.658 [2024-07-15 12:59:26.524443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.658 [2024-07-15 12:59:26.524881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.658 [2024-07-15 12:59:26.524901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.658 [2024-07-15 12:59:26.524911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.658 [2024-07-15 12:59:26.525176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.658 [2024-07-15 12:59:26.525447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.658 [2024-07-15 12:59:26.525459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.658 [2024-07-15 12:59:26.525472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.658 [2024-07-15 12:59:26.529759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.658 [2024-07-15 12:59:26.539047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.658 [2024-07-15 12:59:26.539524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.658 [2024-07-15 12:59:26.539545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.658 [2024-07-15 12:59:26.539555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.658 [2024-07-15 12:59:26.539819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.658 [2024-07-15 12:59:26.540083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.658 [2024-07-15 12:59:26.540094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.658 [2024-07-15 12:59:26.540103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.658 [2024-07-15 12:59:26.544361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.658 [2024-07-15 12:59:26.553641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.658 [2024-07-15 12:59:26.554200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.658 [2024-07-15 12:59:26.554222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.658 [2024-07-15 12:59:26.554232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.658 [2024-07-15 12:59:26.554506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.658 [2024-07-15 12:59:26.554771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.658 [2024-07-15 12:59:26.554783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.658 [2024-07-15 12:59:26.554792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.658 [2024-07-15 12:59:26.559037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.658 [2024-07-15 12:59:26.568322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.658 [2024-07-15 12:59:26.568785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.658 [2024-07-15 12:59:26.568806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.658 [2024-07-15 12:59:26.568816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.658 [2024-07-15 12:59:26.569080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.658 [2024-07-15 12:59:26.569350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.658 [2024-07-15 12:59:26.569362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.658 [2024-07-15 12:59:26.569371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.658 [2024-07-15 12:59:26.573622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.658 [2024-07-15 12:59:26.582901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.658 [2024-07-15 12:59:26.583451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.658 [2024-07-15 12:59:26.583477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.658 [2024-07-15 12:59:26.583488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.658 [2024-07-15 12:59:26.583752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.658 [2024-07-15 12:59:26.584016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.658 [2024-07-15 12:59:26.584027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.658 [2024-07-15 12:59:26.584036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.659 [2024-07-15 12:59:26.588286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.918 [2024-07-15 12:59:26.597635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.918 [2024-07-15 12:59:26.598191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.918 [2024-07-15 12:59:26.598213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.918 [2024-07-15 12:59:26.598224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.918 [2024-07-15 12:59:26.598494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.918 [2024-07-15 12:59:26.598760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.918 [2024-07-15 12:59:26.598772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.918 [2024-07-15 12:59:26.598781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.918 [2024-07-15 12:59:26.603031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.918 [2024-07-15 12:59:26.612343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.918 [2024-07-15 12:59:26.612899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.918 [2024-07-15 12:59:26.612919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.918 [2024-07-15 12:59:26.612930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.918 [2024-07-15 12:59:26.613195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.918 [2024-07-15 12:59:26.613466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.918 [2024-07-15 12:59:26.613479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.918 [2024-07-15 12:59:26.613488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.918 [2024-07-15 12:59:26.617735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.918 [2024-07-15 12:59:26.627020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.918 [2024-07-15 12:59:26.627576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.918 [2024-07-15 12:59:26.627597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.918 [2024-07-15 12:59:26.627607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.918 [2024-07-15 12:59:26.627871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.918 [2024-07-15 12:59:26.628140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.918 [2024-07-15 12:59:26.628152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.918 [2024-07-15 12:59:26.628162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.918 [2024-07-15 12:59:26.632417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.918 [2024-07-15 12:59:26.641702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.918 [2024-07-15 12:59:26.642181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.918 [2024-07-15 12:59:26.642201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.918 [2024-07-15 12:59:26.642211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.918 [2024-07-15 12:59:26.642489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.918 [2024-07-15 12:59:26.642754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.918 [2024-07-15 12:59:26.642766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.918 [2024-07-15 12:59:26.642775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.918 [2024-07-15 12:59:26.647025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.918 [2024-07-15 12:59:26.656310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.918 [2024-07-15 12:59:26.656862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.918 [2024-07-15 12:59:26.656883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.918 [2024-07-15 12:59:26.656893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.918 [2024-07-15 12:59:26.657158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.918 [2024-07-15 12:59:26.657430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.918 [2024-07-15 12:59:26.657443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.918 [2024-07-15 12:59:26.657452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.918 [2024-07-15 12:59:26.661693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.918 [2024-07-15 12:59:26.670979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.918 [2024-07-15 12:59:26.671343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.918 [2024-07-15 12:59:26.671363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.918 [2024-07-15 12:59:26.671373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.918 [2024-07-15 12:59:26.671637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.918 [2024-07-15 12:59:26.671903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.918 [2024-07-15 12:59:26.671914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.918 [2024-07-15 12:59:26.671923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.918 [2024-07-15 12:59:26.676175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.918 [2024-07-15 12:59:26.685706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.918 [2024-07-15 12:59:26.686229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.918 [2024-07-15 12:59:26.686250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420
00:29:34.918 [2024-07-15 12:59:26.686266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set
00:29:34.918 [2024-07-15 12:59:26.686531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor
00:29:34.918 [2024-07-15 12:59:26.686795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.918 [2024-07-15 12:59:26.686806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.918 [2024-07-15 12:59:26.686816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.918 [2024-07-15 12:59:26.691063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.918 [2024-07-15 12:59:26.700347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.918 [2024-07-15 12:59:26.700897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.918 [2024-07-15 12:59:26.700917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.918 [2024-07-15 12:59:26.700926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.918 [2024-07-15 12:59:26.701190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.919 [2024-07-15 12:59:26.701459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.919 [2024-07-15 12:59:26.701472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.919 [2024-07-15 12:59:26.701481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.919 [2024-07-15 12:59:26.705724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.919 [2024-07-15 12:59:26.715017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.919 [2024-07-15 12:59:26.715547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.919 [2024-07-15 12:59:26.715568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.919 [2024-07-15 12:59:26.715578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.919 [2024-07-15 12:59:26.715841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.919 [2024-07-15 12:59:26.716105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.919 [2024-07-15 12:59:26.716117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.919 [2024-07-15 12:59:26.716126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.919 [2024-07-15 12:59:26.720382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.919 [2024-07-15 12:59:26.729723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.919 [2024-07-15 12:59:26.730189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.919 [2024-07-15 12:59:26.730211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.919 [2024-07-15 12:59:26.730221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.919 [2024-07-15 12:59:26.730492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.919 [2024-07-15 12:59:26.730758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.919 [2024-07-15 12:59:26.730770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.919 [2024-07-15 12:59:26.730779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.919 [2024-07-15 12:59:26.735027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:34.919 [2024-07-15 12:59:26.744316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.919 [2024-07-15 12:59:26.744773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.919 [2024-07-15 12:59:26.744795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.919 [2024-07-15 12:59:26.744805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.919 [2024-07-15 12:59:26.745068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.919 [2024-07-15 12:59:26.745339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.919 [2024-07-15 12:59:26.745352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.919 [2024-07-15 12:59:26.745362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.919 [2024-07-15 12:59:26.749611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.919 [2024-07-15 12:59:26.749691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.919 [2024-07-15 12:59:26.758888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.919 [2024-07-15 12:59:26.759439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.919 [2024-07-15 12:59:26.759461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.919 [2024-07-15 12:59:26.759470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.919 [2024-07-15 12:59:26.759735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.919 [2024-07-15 12:59:26.759999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.919 [2024-07-15 12:59:26.760011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.919 [2024-07-15 12:59:26.760020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.919 [2024-07-15 12:59:26.764281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.919 [2024-07-15 12:59:26.773564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.919 [2024-07-15 12:59:26.774094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.919 [2024-07-15 12:59:26.774114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.919 [2024-07-15 12:59:26.774124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.919 [2024-07-15 12:59:26.774394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.919 [2024-07-15 12:59:26.774658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.919 [2024-07-15 12:59:26.774669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.919 [2024-07-15 12:59:26.774679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.919 [2024-07-15 12:59:26.778933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.919 [2024-07-15 12:59:26.788207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.919 [2024-07-15 12:59:26.788754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.919 [2024-07-15 12:59:26.788777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.919 [2024-07-15 12:59:26.788787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.919 [2024-07-15 12:59:26.789051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.919 [2024-07-15 12:59:26.789322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.919 [2024-07-15 12:59:26.789334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.919 [2024-07-15 12:59:26.789344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.919 Malloc0 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.919 [2024-07-15 12:59:26.793592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.919 [2024-07-15 12:59:26.803016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.919 [2024-07-15 12:59:26.803556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.919 [2024-07-15 12:59:26.803578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6e90 with addr=10.0.0.2, port=4420 00:29:34.919 [2024-07-15 12:59:26.803588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6e90 is same with the state(5) to be set 00:29:34.919 [2024-07-15 12:59:26.803852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6e90 (9): Bad file descriptor 00:29:34.919 [2024-07-15 12:59:26.804117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.919 [2024-07-15 12:59:26.804133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.919 [2024-07-15 12:59:26.804142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.919 [2024-07-15 12:59:26.808395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.919 [2024-07-15 12:59:26.816684] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.919 [2024-07-15 12:59:26.817678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.919 12:59:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 4110261 00:29:35.178 [2024-07-15 12:59:26.860404] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:45.155 00:29:45.155 Latency(us) 00:29:45.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.155 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:45.155 Verification LBA range: start 0x0 length 0x4000 00:29:45.155 Nvme1n1 : 15.03 3154.57 12.32 8417.94 0.00 11029.19 1318.17 38130.04 00:29:45.155 =================================================================================================================== 00:29:45.155 Total : 3154.57 12.32 8417.94 0.00 11029.19 1318.17 38130.04 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:45.155 rmmod nvme_tcp 00:29:45.155 rmmod nvme_fabrics 00:29:45.155 rmmod nvme_keyring 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@124 -- # set -e 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 4111290 ']' 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 4111290 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 4111290 ']' 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 4111290 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4111290 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4111290' 00:29:45.155 killing process with pid 4111290 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 4111290 00:29:45.155 12:59:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 4111290 00:29:45.156 12:59:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:45.156 12:59:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:45.156 12:59:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:45.156 12:59:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:45.156 12:59:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:45.156 12:59:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.156 12:59:35 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:45.156 12:59:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.091 12:59:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:46.091 00:29:46.091 real 0m26.919s 00:29:46.091 user 1m4.744s 00:29:46.091 sys 0m6.410s 00:29:46.091 12:59:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:46.350 12:59:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:46.350 ************************************ 00:29:46.350 END TEST nvmf_bdevperf 00:29:46.350 ************************************ 00:29:46.350 12:59:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:46.350 12:59:38 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:46.350 12:59:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:46.350 12:59:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:46.350 12:59:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:46.350 ************************************ 00:29:46.350 START TEST nvmf_target_disconnect 00:29:46.350 ************************************ 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:46.350 * Looking for test storage... 
00:29:46.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:46.350 12:59:38 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:46.350 12:59:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:52.919 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:52.919 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:52.919 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:52.920 12:59:43 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:52.920 Found net devices under 0000:af:00.0: cvl_0_0 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:52.920 Found net devices under 0000:af:00.1: cvl_0_1 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:52.920 12:59:43 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:52.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:52.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:29:52.920 00:29:52.920 --- 10.0.0.2 ping statistics --- 00:29:52.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.920 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:29:52.920 12:59:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:52.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:52.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:29:52.920 00:29:52.920 --- 10.0.0.1 ping statistics --- 00:29:52.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.920 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:52.920 12:59:44 
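The trace above shows `nvmf_tcp_init` moving the target-side interface into a private network namespace so initiator and target traffic cross a real link, then opening port 4420 and verifying reachability with `ping`. A minimal dry-run sketch of that sequence follows; the interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the addresses are taken from the log, and the helper only prints each command rather than executing it, since the real steps require root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup performed by nvmf_tcp_init in the
# trace above. Names and addresses are copied from the log output.
# run() prints commands instead of executing them (real steps need root).
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }   # swap the echo for real execution when run as root

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
```

With this topology in place, the target app is later launched under `ip netns exec cvl_0_0_ns_spdk`, which is why `NVMF_APP` is prefixed with `NVMF_TARGET_NS_CMD` in the trace.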
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:52.920 ************************************ 00:29:52.920 START TEST nvmf_target_disconnect_tc1 00:29:52.920 ************************************ 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:52.920 12:59:44 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:52.920 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.920 [2024-07-15 12:59:44.199693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.920 [2024-07-15 12:59:44.199746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dcf0 with addr=10.0.0.2, port=4420 00:29:52.920 [2024-07-15 12:59:44.199775] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:52.920 [2024-07-15 12:59:44.199793] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:52.920 [2024-07-15 12:59:44.199802] nvme.c: 
913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:52.920 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:52.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:52.920 Initializing NVMe Controllers 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:52.920 00:29:52.920 real 0m0.128s 00:29:52.920 user 0m0.054s 00:29:52.920 sys 0m0.073s 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:52.920 ************************************ 00:29:52.920 END TEST nvmf_target_disconnect_tc1 00:29:52.920 ************************************ 00:29:52.920 12:59:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:52.921 12:59:44 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:52.921 12:59:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:52.921 12:59:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:52.921 12:59:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:52.921 ************************************ 00:29:52.921 START TEST nvmf_target_disconnect_tc2 00:29:52.921 
************************************ 00:29:52.921 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:29:52.921 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:52.921 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:52.921 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:52.921 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:52.921 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:52.921 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=4116767 00:29:52.921 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4116767 00:29:52.921 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:52.921 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 4116767 ']' 00:29:52.921 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.921 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:52.921 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:52.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.921 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:52.921 12:59:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:52.921 [2024-07-15 12:59:44.349381] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:29:52.921 [2024-07-15 12:59:44.349437] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.921 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.921 [2024-07-15 12:59:44.467546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:52.921 [2024-07-15 12:59:44.615425] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.921 [2024-07-15 12:59:44.615490] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.921 [2024-07-15 12:59:44.615517] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.921 [2024-07-15 12:59:44.615535] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.921 [2024-07-15 12:59:44.615550] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:52.921 [2024-07-15 12:59:44.615686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:52.921 [2024-07-15 12:59:44.615797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:52.921 [2024-07-15 12:59:44.615912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:52.921 [2024-07-15 12:59:44.615917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:53.489 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:53.489 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:53.489 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:53.489 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:53.489 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.489 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.489 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:53.489 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.489 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.489 Malloc0 00:29:53.489 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.489 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:29:53.489 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.489 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.489 [2024-07-15 12:59:45.290146] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.489 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.489 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:53.489 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.489 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.489 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.489 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:53.490 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.490 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.490 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.490 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.490 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:53.490 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.490 [2024-07-15 12:59:45.322695] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.490 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.490 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:53.490 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.490 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.490 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.490 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=4116849 00:29:53.490 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:53.490 12:59:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:53.490 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.064 12:59:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 4116767 00:29:56.064 12:59:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error 
(sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Write completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Write completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Write completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Write completed with error (sct=0, sc=8) 
00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Write completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 [2024-07-15 12:59:47.357341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting 
I/O failed 00:29:56.064 Write completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Write completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Write completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Write completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Write completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Write completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Write completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Write completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 [2024-07-15 12:59:47.357647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 
00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Write completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Read completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.064 Write completed with error (sct=0, sc=8) 00:29:56.064 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Write completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Write completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Write completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Write completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 
Write completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Write completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Write completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Write completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 [2024-07-15 12:59:47.358242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Write completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Write completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Write completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Write completed 
with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Write completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Write completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Write completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Write completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Write completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Write completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Write completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Read completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Write completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 Write completed with error (sct=0, sc=8) 00:29:56.065 starting I/O failed 00:29:56.065 [2024-07-15 12:59:47.358656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.065 [2024-07-15 12:59:47.358833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-15 12:59:47.358879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75c8000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 
00:29:56.065 [2024-07-15 12:59:47.359134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.065 [2024-07-15 12:59:47.359179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.065 qpair failed and we were unable to recover it.
00:29:56.066 [2024-07-15 12:59:47.370758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.066 [2024-07-15 12:59:47.370795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.066 qpair failed and we were unable to recover it.
00:29:56.067 [2024-07-15 12:59:47.378102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-15 12:59:47.378131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-15 12:59:47.378271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-15 12:59:47.378300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-15 12:59:47.378507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-15 12:59:47.378538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-15 12:59:47.378829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-15 12:59:47.378859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-15 12:59:47.379137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-15 12:59:47.379166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 
00:29:56.067 [2024-07-15 12:59:47.379473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-15 12:59:47.379503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-15 12:59:47.379691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-15 12:59:47.379721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-15 12:59:47.379847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-15 12:59:47.379876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-15 12:59:47.380090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-15 12:59:47.380119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-15 12:59:47.380249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-15 12:59:47.380287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 
00:29:56.067 [2024-07-15 12:59:47.380519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-15 12:59:47.380548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-15 12:59:47.380671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.380699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.380833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.380850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.381069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.381098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.381215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.381244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 
00:29:56.068 [2024-07-15 12:59:47.381457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.381487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.381742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.381771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.381899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.381929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.382111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.382140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.382355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.382385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 
00:29:56.068 [2024-07-15 12:59:47.382574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.382602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.382787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.382817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.383009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.383039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.383331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.383362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.383572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.383601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 
00:29:56.068 [2024-07-15 12:59:47.383797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.383815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.383988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.384005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.384273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.384303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.384508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.384537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.384673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.384701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 
00:29:56.068 [2024-07-15 12:59:47.384842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.384859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.385029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.385063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.385182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.385210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.385500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.385531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.385747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.385776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 
00:29:56.068 [2024-07-15 12:59:47.385990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.386008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.386119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.386140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.386303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.386322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.386484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.386501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.386667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.386684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 
00:29:56.068 [2024-07-15 12:59:47.386891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.386909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.387030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.387047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.387270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.387299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.068 [2024-07-15 12:59:47.387493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-15 12:59:47.387522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.068 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.387774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.387803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 
00:29:56.069 [2024-07-15 12:59:47.388016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.388033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.388147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.388166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.388336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.388355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.388440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.388456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.388561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.388578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 
00:29:56.069 [2024-07-15 12:59:47.388787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.388805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.388913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.388932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.389111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.389129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.389290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.389308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.389419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.389437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 
00:29:56.069 [2024-07-15 12:59:47.389542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.389560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.389666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.389684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.389792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.389809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.389970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.389988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.390150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.390169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 
00:29:56.069 [2024-07-15 12:59:47.390341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.390359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.390455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.390473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.390637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.390654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.390842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.390859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.391029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.391046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 
00:29:56.069 [2024-07-15 12:59:47.391138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.391155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.391277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.391295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.391466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.391483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.391588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.391605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.391777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.391795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 
00:29:56.069 [2024-07-15 12:59:47.391980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.391998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.392121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.392139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.392275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.392304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.392432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.392460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.392727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.392768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 
00:29:56.069 [2024-07-15 12:59:47.392877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.392894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.392992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.393014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.393144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.393161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.393265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.393284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 00:29:56.069 [2024-07-15 12:59:47.393456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-15 12:59:47.393473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.069 qpair failed and we were unable to recover it. 
00:29:56.072 [2024-07-15 12:59:47.412667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.072 [2024-07-15 12:59:47.412695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.072 qpair failed and we were unable to recover it. 00:29:56.072 [2024-07-15 12:59:47.412977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.072 [2024-07-15 12:59:47.413006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.072 qpair failed and we were unable to recover it. 00:29:56.072 [2024-07-15 12:59:47.413227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.072 [2024-07-15 12:59:47.413244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.072 qpair failed and we were unable to recover it. 00:29:56.072 [2024-07-15 12:59:47.413364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.072 [2024-07-15 12:59:47.413383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.072 qpair failed and we were unable to recover it. 00:29:56.072 [2024-07-15 12:59:47.413641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.072 [2024-07-15 12:59:47.413659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.072 qpair failed and we were unable to recover it. 
00:29:56.072 [2024-07-15 12:59:47.413782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.072 [2024-07-15 12:59:47.413800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.072 qpair failed and we were unable to recover it. 00:29:56.072 [2024-07-15 12:59:47.413907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.072 [2024-07-15 12:59:47.413927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.072 qpair failed and we were unable to recover it. 00:29:56.072 [2024-07-15 12:59:47.414087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.072 [2024-07-15 12:59:47.414105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.072 qpair failed and we were unable to recover it. 00:29:56.072 [2024-07-15 12:59:47.414216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.072 [2024-07-15 12:59:47.414233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.072 qpair failed and we were unable to recover it. 00:29:56.072 [2024-07-15 12:59:47.414430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.072 [2024-07-15 12:59:47.414448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.072 qpair failed and we were unable to recover it. 
00:29:56.072 [2024-07-15 12:59:47.414608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.072 [2024-07-15 12:59:47.414626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.072 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.414801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.414819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.415004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.415022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.415140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.415171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.415466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.415498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 
00:29:56.073 [2024-07-15 12:59:47.415714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.415742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.415939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.415957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.416189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.416206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.416442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.416460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.416698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.416715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 
00:29:56.073 [2024-07-15 12:59:47.416952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.416970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.417086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.417104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.417346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.417364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.417487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.417504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.417672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.417690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 
00:29:56.073 [2024-07-15 12:59:47.417810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.417827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.417924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.417942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.418056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.418073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.418237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.418260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.418442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.418460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 
00:29:56.073 [2024-07-15 12:59:47.418570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.418587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.418678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.418696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.418907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.418925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.419035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.419052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.419133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.419149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 
00:29:56.073 [2024-07-15 12:59:47.419252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.419276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.419535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.419553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.419729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.419748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.419853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.419870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.420044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.420062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 
00:29:56.073 [2024-07-15 12:59:47.420239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.420263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.420363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.420380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.420556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.420574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.420807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.420825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.420984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.421003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 
00:29:56.073 [2024-07-15 12:59:47.421202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.421231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.421466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.421502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.421628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.421657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.421937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.421967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.073 [2024-07-15 12:59:47.422186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.422216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 
00:29:56.073 [2024-07-15 12:59:47.422456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-15 12:59:47.422487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.073 qpair failed and we were unable to recover it. 00:29:56.074 [2024-07-15 12:59:47.422675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.422705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 00:29:56.074 [2024-07-15 12:59:47.422897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.422915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 00:29:56.074 [2024-07-15 12:59:47.423029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.423046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 00:29:56.074 [2024-07-15 12:59:47.423140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.423158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 
00:29:56.074 [2024-07-15 12:59:47.423351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.423370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 00:29:56.074 [2024-07-15 12:59:47.423486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.423503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 00:29:56.074 [2024-07-15 12:59:47.423680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.423698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 00:29:56.074 [2024-07-15 12:59:47.423803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.423820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 00:29:56.074 [2024-07-15 12:59:47.423939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.423957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 
00:29:56.074 [2024-07-15 12:59:47.424050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.424068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 00:29:56.074 [2024-07-15 12:59:47.424246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.424268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 00:29:56.074 [2024-07-15 12:59:47.424381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.424398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 00:29:56.074 [2024-07-15 12:59:47.424486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.424504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 00:29:56.074 [2024-07-15 12:59:47.424683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.424701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 
00:29:56.074 [2024-07-15 12:59:47.424863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.424881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 00:29:56.074 [2024-07-15 12:59:47.424971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.424988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 00:29:56.074 [2024-07-15 12:59:47.425178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.425197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 00:29:56.074 [2024-07-15 12:59:47.425306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.425325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 00:29:56.074 [2024-07-15 12:59:47.425425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.425442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 
00:29:56.074 [2024-07-15 12:59:47.425563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.425581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 00:29:56.074 [2024-07-15 12:59:47.425679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.425696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 00:29:56.074 [2024-07-15 12:59:47.425866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.425884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 00:29:56.074 [2024-07-15 12:59:47.425981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.425999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 00:29:56.074 [2024-07-15 12:59:47.426118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-15 12:59:47.426136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.074 qpair failed and we were unable to recover it. 
00:29:56.074 [2024-07-15 12:59:47.426297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.074 [2024-07-15 12:59:47.426315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.074 qpair failed and we were unable to recover it.
00:29:56.076 [2024-07-15 12:59:47.439452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.076 [2024-07-15 12:59:47.439522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d0000b90 with addr=10.0.0.2, port=4420
00:29:56.076 qpair failed and we were unable to recover it.
00:29:56.077 [2024-07-15 12:59:47.449715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.077 [2024-07-15 12:59:47.449732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.077 qpair failed and we were unable to recover it. 00:29:56.077 [2024-07-15 12:59:47.450024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.077 [2024-07-15 12:59:47.450053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.077 qpair failed and we were unable to recover it. 00:29:56.077 [2024-07-15 12:59:47.450250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.077 [2024-07-15 12:59:47.450286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.077 qpair failed and we were unable to recover it. 00:29:56.077 [2024-07-15 12:59:47.450410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.077 [2024-07-15 12:59:47.450441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.077 qpair failed and we were unable to recover it. 00:29:56.077 [2024-07-15 12:59:47.450708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.077 [2024-07-15 12:59:47.450737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.077 qpair failed and we were unable to recover it. 
00:29:56.077 [2024-07-15 12:59:47.451019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.077 [2024-07-15 12:59:47.451038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.077 qpair failed and we were unable to recover it. 00:29:56.077 [2024-07-15 12:59:47.451200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.077 [2024-07-15 12:59:47.451218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.077 qpair failed and we were unable to recover it. 00:29:56.077 [2024-07-15 12:59:47.451378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.077 [2024-07-15 12:59:47.451396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.077 qpair failed and we were unable to recover it. 00:29:56.077 [2024-07-15 12:59:47.451565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.077 [2024-07-15 12:59:47.451603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.077 qpair failed and we were unable to recover it. 00:29:56.077 [2024-07-15 12:59:47.451862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.077 [2024-07-15 12:59:47.451891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.077 qpair failed and we were unable to recover it. 
00:29:56.077 [2024-07-15 12:59:47.452025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.077 [2024-07-15 12:59:47.452053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.077 qpair failed and we were unable to recover it. 00:29:56.077 [2024-07-15 12:59:47.452243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.077 [2024-07-15 12:59:47.452268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.077 qpair failed and we were unable to recover it. 00:29:56.077 [2024-07-15 12:59:47.452451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.077 [2024-07-15 12:59:47.452469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.077 qpair failed and we were unable to recover it. 00:29:56.077 [2024-07-15 12:59:47.452574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.077 [2024-07-15 12:59:47.452591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.077 qpair failed and we were unable to recover it. 00:29:56.077 [2024-07-15 12:59:47.452713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.077 [2024-07-15 12:59:47.452731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.077 qpair failed and we were unable to recover it. 
00:29:56.077 [2024-07-15 12:59:47.452934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.077 [2024-07-15 12:59:47.452951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.077 qpair failed and we were unable to recover it. 00:29:56.077 [2024-07-15 12:59:47.453119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.077 [2024-07-15 12:59:47.453139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.077 qpair failed and we were unable to recover it. 00:29:56.077 [2024-07-15 12:59:47.453303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.077 [2024-07-15 12:59:47.453322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.077 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.453431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.453448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.453711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.453728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 
00:29:56.078 [2024-07-15 12:59:47.453825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.453842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.454013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.454032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.454230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.454270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.454453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.454482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.454703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.454731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 
00:29:56.078 [2024-07-15 12:59:47.454875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.454892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.455100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.455130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.455320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.455350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.455470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.455498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.455699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.455727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 
00:29:56.078 [2024-07-15 12:59:47.455878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.455897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.456085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.456115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.456302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.456332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.456466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.456495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.456777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.456807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 
00:29:56.078 [2024-07-15 12:59:47.457006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.457035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.457225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.457265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.457469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.457499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.457631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.457661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.457797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.457826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 
00:29:56.078 [2024-07-15 12:59:47.458040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.458071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.458277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.458308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.458601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.458631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.458835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.458865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.459122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.459151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 
00:29:56.078 [2024-07-15 12:59:47.459430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.459448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.459637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.459654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.459830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.459868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.460065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.460094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.460240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.460276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 
00:29:56.078 [2024-07-15 12:59:47.460395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.460425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.460630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.460660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.460965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.460995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.461191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.461209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.461477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.461495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 
00:29:56.078 [2024-07-15 12:59:47.461728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.461745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.461848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.461868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.462100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.462119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.078 qpair failed and we were unable to recover it. 00:29:56.078 [2024-07-15 12:59:47.462282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.078 [2024-07-15 12:59:47.462301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 00:29:56.079 [2024-07-15 12:59:47.462462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.079 [2024-07-15 12:59:47.462479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 
00:29:56.079 [2024-07-15 12:59:47.462730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.079 [2024-07-15 12:59:47.462759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 00:29:56.079 [2024-07-15 12:59:47.462878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.079 [2024-07-15 12:59:47.462906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 00:29:56.079 [2024-07-15 12:59:47.463046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.079 [2024-07-15 12:59:47.463075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 00:29:56.079 [2024-07-15 12:59:47.463289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.079 [2024-07-15 12:59:47.463319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 00:29:56.079 [2024-07-15 12:59:47.463503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.079 [2024-07-15 12:59:47.463532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 
00:29:56.079 [2024-07-15 12:59:47.463733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.079 [2024-07-15 12:59:47.463762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 00:29:56.079 [2024-07-15 12:59:47.464019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.079 [2024-07-15 12:59:47.464036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 00:29:56.079 [2024-07-15 12:59:47.464197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.079 [2024-07-15 12:59:47.464214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 00:29:56.079 [2024-07-15 12:59:47.464382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.079 [2024-07-15 12:59:47.464401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 00:29:56.079 [2024-07-15 12:59:47.464573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.079 [2024-07-15 12:59:47.464591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 
00:29:56.079 [2024-07-15 12:59:47.464699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.079 [2024-07-15 12:59:47.464717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 00:29:56.079 [2024-07-15 12:59:47.464990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.079 [2024-07-15 12:59:47.465008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 00:29:56.079 [2024-07-15 12:59:47.465176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.079 [2024-07-15 12:59:47.465194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 00:29:56.079 [2024-07-15 12:59:47.465302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.079 [2024-07-15 12:59:47.465320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 00:29:56.079 [2024-07-15 12:59:47.465523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.079 [2024-07-15 12:59:47.465541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 
00:29:56.079 [2024-07-15 12:59:47.465720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.079 [2024-07-15 12:59:47.465737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 00:29:56.079 [2024-07-15 12:59:47.466027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.079 [2024-07-15 12:59:47.466056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 00:29:56.079 [2024-07-15 12:59:47.466316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.079 [2024-07-15 12:59:47.466347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 00:29:56.079 [2024-07-15 12:59:47.466572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.079 [2024-07-15 12:59:47.466602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 00:29:56.079 [2024-07-15 12:59:47.466821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.079 [2024-07-15 12:59:47.466850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.079 qpair failed and we were unable to recover it. 
00:29:56.082 [2024-07-15 12:59:47.491488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.491519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 00:29:56.082 [2024-07-15 12:59:47.491661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.491690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 00:29:56.082 [2024-07-15 12:59:47.492001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.492030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 00:29:56.082 [2024-07-15 12:59:47.492325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.492343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 00:29:56.082 [2024-07-15 12:59:47.492473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.492490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 
00:29:56.082 [2024-07-15 12:59:47.492670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.492688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 00:29:56.082 [2024-07-15 12:59:47.492809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.492826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 00:29:56.082 [2024-07-15 12:59:47.493002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.493019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 00:29:56.082 [2024-07-15 12:59:47.493213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.493242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 00:29:56.082 [2024-07-15 12:59:47.493590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.493621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 
00:29:56.082 [2024-07-15 12:59:47.493828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.493857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 00:29:56.082 [2024-07-15 12:59:47.494141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.494170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 00:29:56.082 [2024-07-15 12:59:47.494398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.494429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 00:29:56.082 [2024-07-15 12:59:47.494701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.494736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 00:29:56.082 [2024-07-15 12:59:47.494947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.494977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 
00:29:56.082 [2024-07-15 12:59:47.495183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.495213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 00:29:56.082 [2024-07-15 12:59:47.495527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.495557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 00:29:56.082 [2024-07-15 12:59:47.495787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.495816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 00:29:56.082 [2024-07-15 12:59:47.496065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.496083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 00:29:56.082 [2024-07-15 12:59:47.496244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.496267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 
00:29:56.082 [2024-07-15 12:59:47.496471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.496500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 00:29:56.082 [2024-07-15 12:59:47.496702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.496730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 00:29:56.082 [2024-07-15 12:59:47.496927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.496957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 00:29:56.082 [2024-07-15 12:59:47.497183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.082 [2024-07-15 12:59:47.497212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.082 qpair failed and we were unable to recover it. 00:29:56.082 [2024-07-15 12:59:47.497469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.497487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 
00:29:56.083 [2024-07-15 12:59:47.497729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.497746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.497907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.497924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.498025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.498042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.498202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.498221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.498457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.498475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 
00:29:56.083 [2024-07-15 12:59:47.498669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.498686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.498804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.498821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.498912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.498929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.499095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.499135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.499323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.499354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 
00:29:56.083 [2024-07-15 12:59:47.499607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.499636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.499832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.499861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.500062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.500092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.500272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.500291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.500401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.500418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 
00:29:56.083 [2024-07-15 12:59:47.500626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.500644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.500768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.500786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.501040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.501079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.501286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.501316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.501520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.501548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 
00:29:56.083 [2024-07-15 12:59:47.501737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.501765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.501965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.501993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.502181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.502211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.502507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.502538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.502846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.502875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 
00:29:56.083 [2024-07-15 12:59:47.503073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.503102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.503302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.503332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.503465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.503482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.503647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.503667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.503774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.503793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 
00:29:56.083 [2024-07-15 12:59:47.503970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.503987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.504241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.504264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.504541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.504559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.504812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.504830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.504926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.504943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 
00:29:56.083 [2024-07-15 12:59:47.505150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.505168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.505327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.505346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.505589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.505606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.505794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.083 [2024-07-15 12:59:47.505811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.083 qpair failed and we were unable to recover it. 00:29:56.083 [2024-07-15 12:59:47.505994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.084 [2024-07-15 12:59:47.506012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.084 qpair failed and we were unable to recover it. 
00:29:56.084 [2024-07-15 12:59:47.506129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.084 [2024-07-15 12:59:47.506146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.084 qpair failed and we were unable to recover it. 00:29:56.084 [2024-07-15 12:59:47.506392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.084 [2024-07-15 12:59:47.506410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.084 qpair failed and we were unable to recover it. 00:29:56.084 [2024-07-15 12:59:47.506616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.084 [2024-07-15 12:59:47.506634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.084 qpair failed and we were unable to recover it. 00:29:56.084 [2024-07-15 12:59:47.506799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.084 [2024-07-15 12:59:47.506817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.084 qpair failed and we were unable to recover it. 00:29:56.084 [2024-07-15 12:59:47.506918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.084 [2024-07-15 12:59:47.506935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.084 qpair failed and we were unable to recover it. 
00:29:56.084 [2024-07-15 12:59:47.507112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.084 [2024-07-15 12:59:47.507130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.084 qpair failed and we were unable to recover it. 00:29:56.084 [2024-07-15 12:59:47.507299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.084 [2024-07-15 12:59:47.507317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.084 qpair failed and we were unable to recover it. 00:29:56.084 [2024-07-15 12:59:47.507531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.084 [2024-07-15 12:59:47.507560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.084 qpair failed and we were unable to recover it. 00:29:56.084 [2024-07-15 12:59:47.507869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.084 [2024-07-15 12:59:47.507897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.084 qpair failed and we were unable to recover it. 00:29:56.084 [2024-07-15 12:59:47.508028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.084 [2024-07-15 12:59:47.508057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.084 qpair failed and we were unable to recover it. 
00:29:56.084 [2024-07-15 12:59:47.508196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.084 [2024-07-15 12:59:47.508214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.084 qpair failed and we were unable to recover it.
00:29:56.087 [last message repeated continuously from 12:59:47.508 through 12:59:47.533: connect() failed with errno = 111 (ECONNREFUSED) in posix_sock_create, followed by the matching nvme_tcp_qpair_connect_sock error for tqpair=0x7f75d8000b90, addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it."]
00:29:56.087 [2024-07-15 12:59:47.533721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.533751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.533954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.533983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.534112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.534142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.534449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.534467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.534701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.534720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 
00:29:56.087 [2024-07-15 12:59:47.534827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.534845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.535023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.535040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.535206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.535223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.535421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.535439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.535602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.535620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 
00:29:56.087 [2024-07-15 12:59:47.535729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.535747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.535953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.535969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.536059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.536077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.536163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.536179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.536437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.536457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 
00:29:56.087 [2024-07-15 12:59:47.536555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.536572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.536775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.536793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.537049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.537067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.537222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.537239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.537426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.537456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 
00:29:56.087 [2024-07-15 12:59:47.537652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.537682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.537817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.537846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.538127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.538156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.538280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.538314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.538509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.538537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 
00:29:56.087 [2024-07-15 12:59:47.538666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.538694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.538846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.538876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.539061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.539089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.539274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.539292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 00:29:56.087 [2024-07-15 12:59:47.539581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.087 [2024-07-15 12:59:47.539611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.087 qpair failed and we were unable to recover it. 
00:29:56.087 [2024-07-15 12:59:47.539752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.539781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.539987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.540016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.540224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.540263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.540472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.540502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.540777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.540795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 
00:29:56.088 [2024-07-15 12:59:47.541006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.541023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.541131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.541148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.541243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.541267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.541436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.541465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.541731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.541760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 
00:29:56.088 [2024-07-15 12:59:47.541963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.541991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.542130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.542158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.542368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.542400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.542538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.542567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.542787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.542817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 
00:29:56.088 [2024-07-15 12:59:47.542943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.542971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.543276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.543307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.543509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.543527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.543756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.543774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.543881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.543899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 
00:29:56.088 [2024-07-15 12:59:47.544070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.544088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.544251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.544276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.544445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.544462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.544713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.544730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.544986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.545003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 
00:29:56.088 [2024-07-15 12:59:47.545107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.545124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.545462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.545495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.545643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.545672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.545946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.545976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.546252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.546291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 
00:29:56.088 [2024-07-15 12:59:47.546574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.546592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.546773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.546790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.546969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.546987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.547149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.547192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.547534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.547564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 
00:29:56.088 [2024-07-15 12:59:47.547842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.547871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.548081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.548110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.548282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.548313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.548516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.548546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.548799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.548829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 
00:29:56.088 [2024-07-15 12:59:47.548945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.088 [2024-07-15 12:59:47.548975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.088 qpair failed and we were unable to recover it. 00:29:56.088 [2024-07-15 12:59:47.549284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.089 [2024-07-15 12:59:47.549314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.089 qpair failed and we were unable to recover it. 00:29:56.089 [2024-07-15 12:59:47.549592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.089 [2024-07-15 12:59:47.549610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.089 qpair failed and we were unable to recover it. 00:29:56.089 [2024-07-15 12:59:47.549809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.089 [2024-07-15 12:59:47.549828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.089 qpair failed and we were unable to recover it. 00:29:56.089 [2024-07-15 12:59:47.550004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.089 [2024-07-15 12:59:47.550022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.089 qpair failed and we were unable to recover it. 
00:29:56.089 [2024-07-15 12:59:47.550289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.089 [2024-07-15 12:59:47.550319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.089 qpair failed and we were unable to recover it.
00:29:56.092 [2024-07-15 12:59:47.574217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.574235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.574521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.574539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.574795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.574813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.574931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.574949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.575064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.575082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 
00:29:56.092 [2024-07-15 12:59:47.575281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.575300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.575556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.575574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.575743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.575761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.575851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.575868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.576147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.576164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 
00:29:56.092 [2024-07-15 12:59:47.576419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.576453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.576632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.576650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.576876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.576893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.576985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.577003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.577163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.577180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 
00:29:56.092 [2024-07-15 12:59:47.577361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.577379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.577555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.577572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.577664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.577682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.577789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.577807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.578036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.578053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 
00:29:56.092 [2024-07-15 12:59:47.578305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.578324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.578445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.578462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.578569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.578586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.578820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.578838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.578938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.578956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 
00:29:56.092 [2024-07-15 12:59:47.579079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.579097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.579264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.579283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.579479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.579496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.579611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.579628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.579822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.579839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 
00:29:56.092 [2024-07-15 12:59:47.580099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.580117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.580303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.580322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.580575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.092 [2024-07-15 12:59:47.580593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.092 qpair failed and we were unable to recover it. 00:29:56.092 [2024-07-15 12:59:47.580753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.580770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.580959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.580988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 
00:29:56.093 [2024-07-15 12:59:47.581193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.581222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.581456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.581486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.581674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.581692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.581954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.581972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.582163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.582180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 
00:29:56.093 [2024-07-15 12:59:47.582410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.582428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.582519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.582536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.582707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.582724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.582987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.583016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.583136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.583164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 
00:29:56.093 [2024-07-15 12:59:47.583418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.583450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.583601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.583618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.583854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.583884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.584197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.584226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.584369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.584388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 
00:29:56.093 [2024-07-15 12:59:47.584481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.584498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.584757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.584774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.584957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.584975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.585160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.585189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.585390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.585419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 
00:29:56.093 [2024-07-15 12:59:47.585643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.585673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.585793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.585822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.586078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.586108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.586296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.586326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.586519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.586536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 
00:29:56.093 [2024-07-15 12:59:47.586747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.586782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.587013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.587042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.587238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.587274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.587547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.587565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.587773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.587790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 
00:29:56.093 [2024-07-15 12:59:47.588021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.588038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.588317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.588336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.588519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.588536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.588774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.588792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.588971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.588988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 
00:29:56.093 [2024-07-15 12:59:47.589246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.589270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.589453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.589471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.093 [2024-07-15 12:59:47.589648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.093 [2024-07-15 12:59:47.589665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.093 qpair failed and we were unable to recover it. 00:29:56.094 [2024-07-15 12:59:47.589836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.094 [2024-07-15 12:59:47.589853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.094 qpair failed and we were unable to recover it. 00:29:56.094 [2024-07-15 12:59:47.590038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.094 [2024-07-15 12:59:47.590080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.094 qpair failed and we were unable to recover it. 
00:29:56.094 [2024-07-15 12:59:47.590220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.094 [2024-07-15 12:59:47.590248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.094 qpair failed and we were unable to recover it. 00:29:56.094 [2024-07-15 12:59:47.590522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.094 [2024-07-15 12:59:47.590552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.094 qpair failed and we were unable to recover it. 00:29:56.094 [2024-07-15 12:59:47.590827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.094 [2024-07-15 12:59:47.590845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.094 qpair failed and we were unable to recover it. 00:29:56.094 [2024-07-15 12:59:47.591008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.094 [2024-07-15 12:59:47.591026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.094 qpair failed and we were unable to recover it. 00:29:56.094 [2024-07-15 12:59:47.591261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.094 [2024-07-15 12:59:47.591280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.094 qpair failed and we were unable to recover it. 
00:29:56.097 [2024-07-15 12:59:47.614607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.614636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.614752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.614782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.614969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.614998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.615129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.615157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.615438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.615457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 
00:29:56.097 [2024-07-15 12:59:47.615686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.615703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.615808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.615826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.616004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.616021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.616276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.616296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.616389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.616405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 
00:29:56.097 [2024-07-15 12:59:47.616658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.616675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.616952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.616969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.617095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.617113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.617309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.617349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.617479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.617507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 
00:29:56.097 [2024-07-15 12:59:47.617781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.617811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.617953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.617982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.618275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.618305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.618573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.618602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.618734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.618752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 
00:29:56.097 [2024-07-15 12:59:47.618936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.618953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.619215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.619251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.619398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.619427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.619573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.619602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.619807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.619837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 
00:29:56.097 [2024-07-15 12:59:47.619982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.619999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.620122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.620139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.620311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.620330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.620525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.620543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.620647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.620663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 
00:29:56.097 [2024-07-15 12:59:47.620770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.620791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.620916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.620933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.621041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.621059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.621235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.621252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.621456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.621474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 
00:29:56.097 [2024-07-15 12:59:47.621646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.621664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.621851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.621870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.622050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.622068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.622166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.622183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 00:29:56.097 [2024-07-15 12:59:47.622291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.097 [2024-07-15 12:59:47.622310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.097 qpair failed and we were unable to recover it. 
00:29:56.097 [2024-07-15 12:59:47.622482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.622499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.622697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.622715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.622897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.622914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.623030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.623047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.623217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.623235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 
00:29:56.098 [2024-07-15 12:59:47.623418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.623437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.623547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.623565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.623679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.623696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.623858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.623876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.624105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.624123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 
00:29:56.098 [2024-07-15 12:59:47.624236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.624267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.624375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.624392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.624574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.624592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.624706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.624723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.624961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.624980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 
00:29:56.098 [2024-07-15 12:59:47.625188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.625217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.625380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.625411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.625624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.625643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.625767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.625784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.625896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.625914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 
00:29:56.098 [2024-07-15 12:59:47.626012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.626030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.626196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.626214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.626395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.626414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.626530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.626547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.626681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.626700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 
00:29:56.098 [2024-07-15 12:59:47.626869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.626886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.627121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.627151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.627272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.627301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.627486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.627516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.627784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.627813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 
00:29:56.098 [2024-07-15 12:59:47.628003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.628024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.628200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.628218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.628382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.628400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.628594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.628613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 00:29:56.098 [2024-07-15 12:59:47.628710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.098 [2024-07-15 12:59:47.628728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.098 qpair failed and we were unable to recover it. 
00:29:56.098 [2024-07-15 12:59:47.628897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.098 [2024-07-15 12:59:47.628914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.098 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.629146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.629165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.629330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.629348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.629573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.629603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.629808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.629838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.630046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.630076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.630358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.630388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.630607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.630636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.630767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.630784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.630966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.630984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.631219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.631248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.631472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.631502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.631785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.631815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.631956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.631985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.632242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.632299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.632444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.632473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.632677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.632707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.632918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.632947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.633093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.633122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.633263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.633293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.633483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.633513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.633820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.633849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.633991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.634021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.634155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.634185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.634394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.634412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.634643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.634662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.634776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.634793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.634964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.634982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.635090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.635107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.635216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.635235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.635453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.635470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.635713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.635730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.635938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.635955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.636113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.636131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.636252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.636275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.636368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.636388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.636473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.636489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.636676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.636695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.636802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.636820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.636990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.637007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.637199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.637217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.099 qpair failed and we were unable to recover it.
00:29:56.099 [2024-07-15 12:59:47.637379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-07-15 12:59:47.637398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.637498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.637515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.637632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.637650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.637878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.637896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.638001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.638018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.638214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.638232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.638462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.638480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.638612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.638630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.638749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.638767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.638949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.638967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.639171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.639201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.639400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.639431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.639585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.639613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.639869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.639899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.640034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.640063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.640298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.640331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.640614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.640644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.640860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.640878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.640979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.640997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.641093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.641109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.641228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.641247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.641359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.641377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.641622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.641652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.641781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.641810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.642091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.642120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.642269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.642300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.642622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.642650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.642843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.642872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.643153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.643183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.643394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.643424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.643607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.643624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.643883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.643912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.644116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-07-15 12:59:47.644145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.100 qpair failed and we were unable to recover it.
00:29:56.100 [2024-07-15 12:59:47.644403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.644433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.644654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.644688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.644917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.644947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.645262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.645293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.645561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.645579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.645751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.645769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.645876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.645894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.646054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.646071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.646182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.646200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.646308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.646326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.646421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.646439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.646604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.646621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.646730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.646747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.646955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.646972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.647138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.647155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.647342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.647360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.647486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.647504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.647604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.647622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.647722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.647740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.647847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.647864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.648069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.648087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.648197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.648214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.648396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.648415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.648606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.648623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.648799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.648817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.648940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.648958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.649062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.649079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.649270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.649289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.649391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.649407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.649640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.649658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.649890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.649907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.650003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.650023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.650208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.650226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.650340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.650357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.650663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.650682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.650778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.650796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.650947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.650964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.651084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.651102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.651268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.651286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.101 qpair failed and we were unable to recover it.
00:29:56.101 [2024-07-15 12:59:47.651447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-07-15 12:59:47.651465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.102 qpair failed and we were unable to recover it.
00:29:56.102 [2024-07-15 12:59:47.651573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.102 [2024-07-15 12:59:47.651591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.102 qpair failed and we were unable to recover it.
00:29:56.102 [2024-07-15 12:59:47.651753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.651773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.651856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.651872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.652034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.652052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.652161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.652178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.652292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.652310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 
00:29:56.102 [2024-07-15 12:59:47.652426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.652443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.652615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.652633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.652731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.652749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.652844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.652863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.653092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.653110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 
00:29:56.102 [2024-07-15 12:59:47.653323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.653341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.653471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.653489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.653587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.653605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.653800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.653818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.653921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.653939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 
00:29:56.102 [2024-07-15 12:59:47.654136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.654166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.654396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.654427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.654703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.654733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.654855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.654872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.654990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.655008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 
00:29:56.102 [2024-07-15 12:59:47.655216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.655234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.655445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.655464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.655638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.655655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.655825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.655842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.656020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.656038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 
00:29:56.102 [2024-07-15 12:59:47.656220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.656249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.656423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.656453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.656694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.656725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.656924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.656941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.102 [2024-07-15 12:59:47.657055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.657072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 
00:29:56.102 [2024-07-15 12:59:47.657331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.102 [2024-07-15 12:59:47.657350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.102 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.657451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.657469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.657651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.657668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.657968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.657998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.658317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.658348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 
00:29:56.103 [2024-07-15 12:59:47.658546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.658564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.658741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.658758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.658965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.658994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.659180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.659209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.659416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.659446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 
00:29:56.103 [2024-07-15 12:59:47.659653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.659673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.659876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.659893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.660124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.660143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.660260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.660278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.660434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.660452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 
00:29:56.103 [2024-07-15 12:59:47.660656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.660672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.660848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.660866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.661079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.661108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.661296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.661326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.661520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.661550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 
00:29:56.103 [2024-07-15 12:59:47.661682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.661712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.661838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.661867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.662098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.662128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.662384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.662415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.662676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.662706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 
00:29:56.103 [2024-07-15 12:59:47.662991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.663009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.663189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.663207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.663304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.663322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.663513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.663530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.663706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.663735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 
00:29:56.103 [2024-07-15 12:59:47.664019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.664048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.664281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.664312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.664535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.664553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.664768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.103 [2024-07-15 12:59:47.664786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.103 qpair failed and we were unable to recover it. 00:29:56.103 [2024-07-15 12:59:47.664952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.664970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 
00:29:56.104 [2024-07-15 12:59:47.665071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.665088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.665322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.665341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.665470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.665488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.665648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.665666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.665826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.665844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 
00:29:56.104 [2024-07-15 12:59:47.666132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.666149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.666262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.666280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.666447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.666465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.666722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.666740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.667014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.667044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 
00:29:56.104 [2024-07-15 12:59:47.667277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.667306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.667508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.667537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.667792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.667821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.668081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.668111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.668242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.668301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 
00:29:56.104 [2024-07-15 12:59:47.668437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.668478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.668641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.668658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.668817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.668851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.669053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.669082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.669239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.669278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 
00:29:56.104 [2024-07-15 12:59:47.669475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.669504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.669636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.669665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.669797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.669826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.670078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.670108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.670316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.670347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 
00:29:56.104 [2024-07-15 12:59:47.670600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.670618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.670785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.670803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.670923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.670940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.671205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.671222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 00:29:56.104 [2024-07-15 12:59:47.671402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.671421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.104 qpair failed and we were unable to recover it. 
00:29:56.104 [2024-07-15 12:59:47.671514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.104 [2024-07-15 12:59:47.671530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.671696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.671714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.671834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.671852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.672082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.672100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.672313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.672344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 
00:29:56.105 [2024-07-15 12:59:47.672533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.672551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.672641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.672658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.672782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.672799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.672996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.673038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.673185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.673214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 
00:29:56.105 [2024-07-15 12:59:47.673501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.673531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.673660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.673695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.673930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.673948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.674203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.674221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.674466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.674485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 
00:29:56.105 [2024-07-15 12:59:47.674595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.674612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.674716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.674734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.674966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.674984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.675184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.675201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.675376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.675395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 
00:29:56.105 [2024-07-15 12:59:47.675559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.675577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.675695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.675713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.675822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.675839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.676070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.676088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.676189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.676206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 
00:29:56.105 [2024-07-15 12:59:47.676379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.676400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.676512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.676530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.676635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.676653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.676754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.676772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.676953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.676971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 
00:29:56.105 [2024-07-15 12:59:47.677141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.677159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.677347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.677379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.677485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.677515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.677638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.677666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.677857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.677887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 
00:29:56.105 [2024-07-15 12:59:47.678195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.678224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.678354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.678385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.678519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.678548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.678668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.678696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.678902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.678931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 
00:29:56.105 [2024-07-15 12:59:47.679121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.105 [2024-07-15 12:59:47.679138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.105 qpair failed and we were unable to recover it. 00:29:56.105 [2024-07-15 12:59:47.679394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.679412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.679576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.679593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.679684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.679700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.679870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.679888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 
00:29:56.106 [2024-07-15 12:59:47.680074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.680091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.680343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.680369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.680536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.680553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.680647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.680665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.680835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.680852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 
00:29:56.106 [2024-07-15 12:59:47.681048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.681076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.681273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.681305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.681426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.681456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.681591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.681619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.681817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.681845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 
00:29:56.106 [2024-07-15 12:59:47.682046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.682064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.682231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.682249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.682519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.682537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.682800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.682817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.683076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.683094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 
00:29:56.106 [2024-07-15 12:59:47.683201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.683219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.683334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.683352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.683542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.683560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.683724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.683741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.683904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.683922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 
00:29:56.106 [2024-07-15 12:59:47.684035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.684053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.684286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.684305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.684481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.684498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.684678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.684706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.684838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.684867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 
00:29:56.106 [2024-07-15 12:59:47.685093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.685122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.106 qpair failed and we were unable to recover it. 00:29:56.106 [2024-07-15 12:59:47.685314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.106 [2024-07-15 12:59:47.685345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.107 qpair failed and we were unable to recover it. 00:29:56.107 [2024-07-15 12:59:47.685634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.107 [2024-07-15 12:59:47.685663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.107 qpair failed and we were unable to recover it. 00:29:56.107 [2024-07-15 12:59:47.685961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.107 [2024-07-15 12:59:47.685979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.107 qpair failed and we were unable to recover it. 00:29:56.107 [2024-07-15 12:59:47.686140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.107 [2024-07-15 12:59:47.686157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.107 qpair failed and we were unable to recover it. 
00:29:56.107 [2024-07-15 12:59:47.686278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.107 [2024-07-15 12:59:47.686296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.107 qpair failed and we were unable to recover it. 00:29:56.107 [2024-07-15 12:59:47.686469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.107 [2024-07-15 12:59:47.686487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.107 qpair failed and we were unable to recover it. 00:29:56.107 [2024-07-15 12:59:47.686660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.107 [2024-07-15 12:59:47.686677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.107 qpair failed and we were unable to recover it. 00:29:56.107 [2024-07-15 12:59:47.686854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.107 [2024-07-15 12:59:47.686873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.107 qpair failed and we were unable to recover it. 00:29:56.107 [2024-07-15 12:59:47.686989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.107 [2024-07-15 12:59:47.687006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.107 qpair failed and we were unable to recover it. 
00:29:56.108 [2024-07-15 12:59:47.696895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.108 [2024-07-15 12:59:47.696913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.108 qpair failed and we were unable to recover it.
00:29:56.108 [2024-07-15 12:59:47.697033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.108 [2024-07-15 12:59:47.697051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.108 qpair failed and we were unable to recover it.
00:29:56.108 [2024-07-15 12:59:47.697149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.108 [2024-07-15 12:59:47.697166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.108 qpair failed and we were unable to recover it.
00:29:56.108 [2024-07-15 12:59:47.697355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5ce60 is same with the state(5) to be set
00:29:56.108 [2024-07-15 12:59:47.697730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.108 [2024-07-15 12:59:47.697800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d0000b90 with addr=10.0.0.2, port=4420
00:29:56.108 qpair failed and we were unable to recover it.
00:29:56.108 [2024-07-15 12:59:47.698068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.108 [2024-07-15 12:59:47.698101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d0000b90 with addr=10.0.0.2, port=4420
00:29:56.108 qpair failed and we were unable to recover it.
00:29:56.110 [2024-07-15 12:59:47.710357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.710376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 00:29:56.110 [2024-07-15 12:59:47.710557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.710574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 00:29:56.110 [2024-07-15 12:59:47.710751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.710769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 00:29:56.110 [2024-07-15 12:59:47.710938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.710968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 00:29:56.110 [2024-07-15 12:59:47.711105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.711134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 
00:29:56.110 [2024-07-15 12:59:47.711418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.711449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 00:29:56.110 [2024-07-15 12:59:47.711647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.711664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 00:29:56.110 [2024-07-15 12:59:47.711848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.711877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 00:29:56.110 [2024-07-15 12:59:47.712133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.712167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 00:29:56.110 [2024-07-15 12:59:47.712356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.712388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 
00:29:56.110 [2024-07-15 12:59:47.712609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.712639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 00:29:56.110 [2024-07-15 12:59:47.712895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.712912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 00:29:56.110 [2024-07-15 12:59:47.713005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.713022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 00:29:56.110 [2024-07-15 12:59:47.713183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.713201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 00:29:56.110 [2024-07-15 12:59:47.713306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.713327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 
00:29:56.110 [2024-07-15 12:59:47.713503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.713520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 00:29:56.110 [2024-07-15 12:59:47.713693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.713710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 00:29:56.110 [2024-07-15 12:59:47.713815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.713832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 00:29:56.110 [2024-07-15 12:59:47.713944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.713962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 00:29:56.110 [2024-07-15 12:59:47.714073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.714091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 
00:29:56.110 [2024-07-15 12:59:47.714309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.714328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 00:29:56.110 [2024-07-15 12:59:47.714433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.714451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 00:29:56.110 [2024-07-15 12:59:47.714618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.714636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 00:29:56.110 [2024-07-15 12:59:47.714821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.110 [2024-07-15 12:59:47.714850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.110 qpair failed and we were unable to recover it. 00:29:56.110 [2024-07-15 12:59:47.714973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.715001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 
00:29:56.111 [2024-07-15 12:59:47.715262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.715293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.715525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.715554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.715755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.715773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.715968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.715986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.716097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.716115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 
00:29:56.111 [2024-07-15 12:59:47.716276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.716294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.716475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.716492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.716604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.716622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.716800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.716818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.716998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.717015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 
00:29:56.111 [2024-07-15 12:59:47.717140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.717158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.717265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.717283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.717477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.717495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.717619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.717637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.717767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.717785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 
00:29:56.111 [2024-07-15 12:59:47.717946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.717964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.718074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.718091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.718290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.718309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.718430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.718449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.718616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.718634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 
00:29:56.111 [2024-07-15 12:59:47.718742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.718760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.719025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.719043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.719320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.719338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.719465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.719486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.719583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.719601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 
00:29:56.111 [2024-07-15 12:59:47.719863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.719881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.720145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.720163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.720281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.720300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.720413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.720431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.720710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.720728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 
00:29:56.111 [2024-07-15 12:59:47.720822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.720838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.721002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.721020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.721119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.721136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.721228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.721245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.721533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.721551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 
00:29:56.111 [2024-07-15 12:59:47.721794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.721824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.721958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.721987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.722185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.722216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.722368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.722399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.722601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.722629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 
00:29:56.111 [2024-07-15 12:59:47.722914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.722943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.723237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.723277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.723506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.723535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.723722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.723752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.724002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.724019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 
00:29:56.111 [2024-07-15 12:59:47.724224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.724242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.111 qpair failed and we were unable to recover it. 00:29:56.111 [2024-07-15 12:59:47.724495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.111 [2024-07-15 12:59:47.724513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.112 qpair failed and we were unable to recover it. 00:29:56.112 [2024-07-15 12:59:47.724771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.112 [2024-07-15 12:59:47.724788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.112 qpair failed and we were unable to recover it. 00:29:56.112 [2024-07-15 12:59:47.724907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.112 [2024-07-15 12:59:47.724925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.112 qpair failed and we were unable to recover it. 00:29:56.112 [2024-07-15 12:59:47.725104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.112 [2024-07-15 12:59:47.725121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.112 qpair failed and we were unable to recover it. 
00:29:56.112 [2024-07-15 12:59:47.725309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.725340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.725621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.725650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.725768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.725797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.726002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.726020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.726149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.726166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.726380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.726399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.726529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.726547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.726711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.726729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.726888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.726906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.727000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.727018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.727184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.727202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.727443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.727461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.727690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.727708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.727830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.727851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.728044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.728061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.728294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.728313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.728500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.728518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.728628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.728646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.728740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.728757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.728997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.729015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.729128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.729145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.729323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.729342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.729599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.729617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.729734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.729752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.729933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.729951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.730050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.730068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.730228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.730245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.730476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.730495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.730663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.730681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.730906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.730935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.731139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.731168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.731465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.731496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.112 [2024-07-15 12:59:47.731689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.112 [2024-07-15 12:59:47.731718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.112 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.731871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.731900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.732102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.732120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.732223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.732241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.732485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.732503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.732672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.732690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.732967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.732996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.733182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.733212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.733518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.733548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.733695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.733724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.733852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.733884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.734015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.734033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.734125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.734142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.734303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.734323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.734431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.734448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.734711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.734729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.734926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.734944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.735115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.735133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.735343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.735362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.735622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.735640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.735743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.735761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.735869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.735890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.735985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.736004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.736248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.736271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.736438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.736456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.736699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.736716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.736812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.736830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.736942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.736960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.737056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.737073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.737341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.737359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.737627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.737645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.737824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.737842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.737961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.737980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.738077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.738095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.738251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.113 [2024-07-15 12:59:47.738281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.113 qpair failed and we were unable to recover it.
00:29:56.113 [2024-07-15 12:59:47.738472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.738491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.738659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.738677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.738863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.738881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.739103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.739121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.739375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.739393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.739489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.739508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.739672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.739690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.739915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.739933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.740025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.740041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.740296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.740314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.740423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.740441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.740617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.740634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.740813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.740830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.741015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.741035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.741157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.741175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.741291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.741309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.741434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.741451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.741645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.741662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.741762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.741780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.741939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.741957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.742235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.742258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.742358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.742376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.742475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.742493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.742724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.742741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.742934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.742951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.743153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.743183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.743321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.743352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.743541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.743570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.743842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.743872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.744151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.744168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.744418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.744436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.744557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.744574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.744803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.744821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.745000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.745018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.745204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.114 [2024-07-15 12:59:47.745222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.114 qpair failed and we were unable to recover it.
00:29:56.114 [2024-07-15 12:59:47.745413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.115 [2024-07-15 12:59:47.745444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.115 qpair failed and we were unable to recover it.
00:29:56.115 [2024-07-15 12:59:47.745700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.115 [2024-07-15 12:59:47.745730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.115 qpair failed and we were unable to recover it.
00:29:56.115 [2024-07-15 12:59:47.745931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.115 [2024-07-15 12:59:47.745960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.115 qpair failed and we were unable to recover it.
00:29:56.115 [2024-07-15 12:59:47.746156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.115 [2024-07-15 12:59:47.746187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.115 qpair failed and we were unable to recover it.
00:29:56.115 [2024-07-15 12:59:47.746417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.115 [2024-07-15 12:59:47.746448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.115 qpair failed and we were unable to recover it.
00:29:56.115 [2024-07-15 12:59:47.746645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.115 [2024-07-15 12:59:47.746674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.115 qpair failed and we were unable to recover it.
00:29:56.115 [2024-07-15 12:59:47.746889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.115 [2024-07-15 12:59:47.746919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.115 qpair failed and we were unable to recover it.
00:29:56.115 [2024-07-15 12:59:47.747193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.115 [2024-07-15 12:59:47.747210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.115 qpair failed and we were unable to recover it.
00:29:56.115 [2024-07-15 12:59:47.747433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.115 [2024-07-15 12:59:47.747452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.115 qpair failed and we were unable to recover it.
00:29:56.115 [2024-07-15 12:59:47.747689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.115 [2024-07-15 12:59:47.747706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.115 qpair failed and we were unable to recover it.
00:29:56.115 [2024-07-15 12:59:47.747821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.115 [2024-07-15 12:59:47.747838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.115 qpair failed and we were unable to recover it.
00:29:56.115 [2024-07-15 12:59:47.748013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.115 [2024-07-15 12:59:47.748031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.115 qpair failed and we were unable to recover it.
00:29:56.115 [2024-07-15 12:59:47.748227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.115 [2024-07-15 12:59:47.748263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.115 qpair failed and we were unable to recover it.
00:29:56.115 [2024-07-15 12:59:47.748464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.748494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 00:29:56.115 [2024-07-15 12:59:47.748680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.748709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 00:29:56.115 [2024-07-15 12:59:47.748924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.748942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 00:29:56.115 [2024-07-15 12:59:47.749022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.749039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 00:29:56.115 [2024-07-15 12:59:47.749210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.749227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 
00:29:56.115 [2024-07-15 12:59:47.749357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.749378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 00:29:56.115 [2024-07-15 12:59:47.749538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.749555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 00:29:56.115 [2024-07-15 12:59:47.749660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.749678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 00:29:56.115 [2024-07-15 12:59:47.749841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.749859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 00:29:56.115 [2024-07-15 12:59:47.750040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.750069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 
00:29:56.115 [2024-07-15 12:59:47.750272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.750302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 00:29:56.115 [2024-07-15 12:59:47.750493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.750522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 00:29:56.115 [2024-07-15 12:59:47.750652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.750682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 00:29:56.115 [2024-07-15 12:59:47.750963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.750992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 00:29:56.115 [2024-07-15 12:59:47.751114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.751143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 
00:29:56.115 [2024-07-15 12:59:47.751420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.751439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 00:29:56.115 [2024-07-15 12:59:47.751643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.751661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 00:29:56.115 [2024-07-15 12:59:47.751834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.751852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 00:29:56.115 [2024-07-15 12:59:47.751957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.751975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 00:29:56.115 [2024-07-15 12:59:47.752141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.752159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 
00:29:56.115 [2024-07-15 12:59:47.752338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.752357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 00:29:56.115 [2024-07-15 12:59:47.752590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.115 [2024-07-15 12:59:47.752608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.115 qpair failed and we were unable to recover it. 00:29:56.115 [2024-07-15 12:59:47.752726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.752744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.752902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.752920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.753025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.753043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 
00:29:56.116 [2024-07-15 12:59:47.753279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.753298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.753466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.753484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.753658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.753676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.753774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.753792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.753970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.754000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 
00:29:56.116 [2024-07-15 12:59:47.754229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.754285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.754503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.754532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.754722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.754739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.754992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.755010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.755131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.755149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 
00:29:56.116 [2024-07-15 12:59:47.755310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.755329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.755501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.755519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.755706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.755724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.755984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.756001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.756251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.756274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 
00:29:56.116 [2024-07-15 12:59:47.756398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.756415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.756521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.756538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.756739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.756757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.756972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.757001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.757263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.757295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 
00:29:56.116 [2024-07-15 12:59:47.757553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.757588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.757726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.757755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.757948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.757966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.758218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.758248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.758467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.758496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 
00:29:56.116 [2024-07-15 12:59:47.758722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.758752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.758890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.758919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.759106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.759135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.759274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.759306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.759439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.759457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 
00:29:56.116 [2024-07-15 12:59:47.759620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.759639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.759808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.759825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.760064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.760082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.116 [2024-07-15 12:59:47.760247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.116 [2024-07-15 12:59:47.760271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.116 qpair failed and we were unable to recover it. 00:29:56.117 [2024-07-15 12:59:47.760368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.117 [2024-07-15 12:59:47.760385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.117 qpair failed and we were unable to recover it. 
00:29:56.117 [2024-07-15 12:59:47.760648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.117 [2024-07-15 12:59:47.760666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.117 qpair failed and we were unable to recover it. 00:29:56.117 [2024-07-15 12:59:47.760805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.117 [2024-07-15 12:59:47.760823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.117 qpair failed and we were unable to recover it. 00:29:56.117 [2024-07-15 12:59:47.760933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.117 [2024-07-15 12:59:47.760951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.117 qpair failed and we were unable to recover it. 00:29:56.117 [2024-07-15 12:59:47.761055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.117 [2024-07-15 12:59:47.761073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.117 qpair failed and we were unable to recover it. 00:29:56.117 [2024-07-15 12:59:47.761161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.117 [2024-07-15 12:59:47.761177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.117 qpair failed and we were unable to recover it. 
00:29:56.117 [2024-07-15 12:59:47.761276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.117 [2024-07-15 12:59:47.761295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.117 qpair failed and we were unable to recover it. 00:29:56.117 [2024-07-15 12:59:47.761426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.117 [2024-07-15 12:59:47.761444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.117 qpair failed and we were unable to recover it. 00:29:56.117 [2024-07-15 12:59:47.761550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.117 [2024-07-15 12:59:47.761567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.117 qpair failed and we were unable to recover it. 00:29:56.117 [2024-07-15 12:59:47.761796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.117 [2024-07-15 12:59:47.761813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.117 qpair failed and we were unable to recover it. 00:29:56.117 [2024-07-15 12:59:47.761988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.117 [2024-07-15 12:59:47.762006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.117 qpair failed and we were unable to recover it. 
00:29:56.117 [2024-07-15 12:59:47.762186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.117 [2024-07-15 12:59:47.762204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.117 qpair failed and we were unable to recover it. 00:29:56.117 [2024-07-15 12:59:47.762309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.117 [2024-07-15 12:59:47.762329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.117 qpair failed and we were unable to recover it. 00:29:56.117 [2024-07-15 12:59:47.762511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.117 [2024-07-15 12:59:47.762530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.117 qpair failed and we were unable to recover it. 00:29:56.117 [2024-07-15 12:59:47.762690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.117 [2024-07-15 12:59:47.762708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.117 qpair failed and we were unable to recover it. 00:29:56.117 [2024-07-15 12:59:47.762822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.117 [2024-07-15 12:59:47.762840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.117 qpair failed and we were unable to recover it. 
00:29:56.117 [2024-07-15 12:59:47.763015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.117 [2024-07-15 12:59:47.763032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.117 qpair failed and we were unable to recover it. 00:29:56.117 [2024-07-15 12:59:47.763225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.117 [2024-07-15 12:59:47.763262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.117 qpair failed and we were unable to recover it. 00:29:56.117 [2024-07-15 12:59:47.763550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.117 [2024-07-15 12:59:47.763579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.117 qpair failed and we were unable to recover it. 00:29:56.117 [2024-07-15 12:59:47.763835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.117 [2024-07-15 12:59:47.763864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.117 qpair failed and we were unable to recover it. 00:29:56.117 [2024-07-15 12:59:47.764049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.117 [2024-07-15 12:59:47.764067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.117 qpair failed and we were unable to recover it. 
00:29:56.120 [2024-07-15 12:59:47.786727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.120 [2024-07-15 12:59:47.786745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.120 qpair failed and we were unable to recover it. 00:29:56.120 [2024-07-15 12:59:47.786905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.120 [2024-07-15 12:59:47.786922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.120 qpair failed and we were unable to recover it. 00:29:56.120 [2024-07-15 12:59:47.787086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.120 [2024-07-15 12:59:47.787104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.120 qpair failed and we were unable to recover it. 00:29:56.120 [2024-07-15 12:59:47.787361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.120 [2024-07-15 12:59:47.787379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.120 qpair failed and we were unable to recover it. 00:29:56.120 [2024-07-15 12:59:47.787633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.120 [2024-07-15 12:59:47.787650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.120 qpair failed and we were unable to recover it. 
00:29:56.120 [2024-07-15 12:59:47.787856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.120 [2024-07-15 12:59:47.787874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.120 qpair failed and we were unable to recover it. 00:29:56.120 [2024-07-15 12:59:47.788066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.120 [2024-07-15 12:59:47.788083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.120 qpair failed and we were unable to recover it. 00:29:56.120 [2024-07-15 12:59:47.788204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.120 [2024-07-15 12:59:47.788222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.120 qpair failed and we were unable to recover it. 00:29:56.120 [2024-07-15 12:59:47.788397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.120 [2024-07-15 12:59:47.788415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.120 qpair failed and we were unable to recover it. 00:29:56.120 [2024-07-15 12:59:47.788511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.120 [2024-07-15 12:59:47.788529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.120 qpair failed and we were unable to recover it. 
00:29:56.120 [2024-07-15 12:59:47.788790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.120 [2024-07-15 12:59:47.788808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.120 qpair failed and we were unable to recover it. 00:29:56.120 [2024-07-15 12:59:47.788972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.120 [2024-07-15 12:59:47.788990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.120 qpair failed and we were unable to recover it. 00:29:56.120 [2024-07-15 12:59:47.789177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.120 [2024-07-15 12:59:47.789207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.120 qpair failed and we were unable to recover it. 00:29:56.120 [2024-07-15 12:59:47.789414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.120 [2024-07-15 12:59:47.789444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.789648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.789677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 
00:29:56.121 [2024-07-15 12:59:47.789899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.789920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.790113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.790131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.790225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.790242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.790457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.790475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.790703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.790722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 
00:29:56.121 [2024-07-15 12:59:47.790886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.790903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.791155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.791172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.791337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.791355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.791560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.791578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.791756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.791774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 
00:29:56.121 [2024-07-15 12:59:47.792035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.792052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.792235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.792253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.792391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.792409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.792668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.792686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.792867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.792885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 
00:29:56.121 [2024-07-15 12:59:47.793077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.793095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.793213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.793230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.793472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.793490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.793720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.793738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.793928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.793946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 
00:29:56.121 [2024-07-15 12:59:47.794126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.794143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.794348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.794366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.794652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.794670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.794782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.794799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.794915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.794933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 
00:29:56.121 [2024-07-15 12:59:47.795211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.795229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.795530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.795561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.795823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.795853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.795990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.796008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.796274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.796305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 
00:29:56.121 [2024-07-15 12:59:47.796448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.796477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.796669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.796698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.796901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.796931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.797070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.797099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.797219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.797237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 
00:29:56.121 [2024-07-15 12:59:47.797421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.797439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.121 [2024-07-15 12:59:47.797622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.121 [2024-07-15 12:59:47.797640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.121 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.797819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.797837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.797992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.798010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.798192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.798209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 
00:29:56.122 [2024-07-15 12:59:47.798451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.798473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.798647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.798665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.798831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.798873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.799067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.799097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.799291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.799321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 
00:29:56.122 [2024-07-15 12:59:47.799571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.799589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.799687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.799705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.799968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.799985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.800215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.800233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.800333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.800350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 
00:29:56.122 [2024-07-15 12:59:47.800541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.800559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.800665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.800683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.800851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.800868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.801030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.801048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.801160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.801178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 
00:29:56.122 [2024-07-15 12:59:47.801350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.801368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.801528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.801546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.801721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.801739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.801926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.801956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.802243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.802292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 
00:29:56.122 [2024-07-15 12:59:47.802441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.802471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.802666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.802695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.803006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.803035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.803250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.803290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.803545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.803575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 
00:29:56.122 [2024-07-15 12:59:47.803723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.803753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.804007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.804037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.804248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.804289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.804505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.804535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.804793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.804822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 
00:29:56.122 [2024-07-15 12:59:47.805029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.805046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.122 qpair failed and we were unable to recover it. 00:29:56.122 [2024-07-15 12:59:47.805213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.122 [2024-07-15 12:59:47.805230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.805418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.805437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.805615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.805633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.805798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.805816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 
00:29:56.123 [2024-07-15 12:59:47.805939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.805957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.806185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.806203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.806305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.806322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.806441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.806458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.806654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.806672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 
00:29:56.123 [2024-07-15 12:59:47.806844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.806865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.807047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.807086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.807275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.807305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.807446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.807475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.807707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.807736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 
00:29:56.123 [2024-07-15 12:59:47.807854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.807884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.808068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.808097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.808382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.808400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.808521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.808539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.808803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.808821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 
00:29:56.123 [2024-07-15 12:59:47.808994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.809011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.809246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.809282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.809419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.809449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.809782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.809811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.810011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.810041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 
00:29:56.123 [2024-07-15 12:59:47.810168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.810197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.810406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.810437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.810637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.810655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.810919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.810937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.811097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.811115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 
00:29:56.123 [2024-07-15 12:59:47.811216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.811234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.811343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.811362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.811568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.811586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.811748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.811766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.811860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.811878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 
00:29:56.123 [2024-07-15 12:59:47.811990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.812007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.812265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.812284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.812491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.812510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.812784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.812802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.812974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.812992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 
00:29:56.123 [2024-07-15 12:59:47.813155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.813173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.813359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.123 [2024-07-15 12:59:47.813389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-07-15 12:59:47.813595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.813624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.813813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.813842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.813976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.814004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 
00:29:56.124 [2024-07-15 12:59:47.814159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.814188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.814457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.814487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.814740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.814758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.814922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.814940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.815117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.815146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 
00:29:56.124 [2024-07-15 12:59:47.815428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.815464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.815695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.815724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.815911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.815939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.816216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.816245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.816560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.816590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 
00:29:56.124 [2024-07-15 12:59:47.816724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.816754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.816973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.817003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.817198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.817226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.817374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.817409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.817569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.817588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 
00:29:56.124 [2024-07-15 12:59:47.817822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.817840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.817997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.818015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.818176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.818193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.818438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.818458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.818578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.818597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 
00:29:56.124 [2024-07-15 12:59:47.818770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.818789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.818959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.818976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.819198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.819216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.819389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.819407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.819621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.819650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 
00:29:56.124 [2024-07-15 12:59:47.819796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.819834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.820013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.820031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.820224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.820252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.820489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.820520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.820752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.820781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 
00:29:56.124 [2024-07-15 12:59:47.820967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.820996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.821277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.821307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.821477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.821545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75c8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.821828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.821860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75c8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.822145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.822174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75c8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 
00:29:56.124 [2024-07-15 12:59:47.822446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.822478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75c8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.822599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.822628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75c8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.822913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.822942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75c8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.823143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-15 12:59:47.823172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75c8000b90 with addr=10.0.0.2, port=4420 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-07-15 12:59:47.823300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.125 [2024-07-15 12:59:47.823332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75c8000b90 with addr=10.0.0.2, port=4420 00:29:56.125 qpair failed and we were unable to recover it. 
00:29:56.125 [2024-07-15 12:59:47.823557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.823586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75c8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.823771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.823801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75c8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.824025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.824054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75c8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.824252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.824291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75c8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.824504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.824524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.824711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.824732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.824908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.824926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.825129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.825159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.825346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.825376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.825591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.825620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.825837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.825865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.826066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.826094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.826379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.826411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.826623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.826652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.826792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.826820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.827024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.827054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.827171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.827188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.827387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.827404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.827605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.827635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.827898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.827928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.828182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.828210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.828506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.828535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.828686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.828714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.828845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.828862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.828967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.828984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.829212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.829229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.829433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.829451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.829620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.829650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.829905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.829934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.830142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.830171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.830376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.830407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.830666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.830695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.830942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.831012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.831246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.831296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.831441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.831472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.831746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.831779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.831976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.832005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.832304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.832334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.832535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.832566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.832756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.832784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.832989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.833019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.833149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.833178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.125 [2024-07-15 12:59:47.833307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.125 [2024-07-15 12:59:47.833337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.125 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.833530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.833560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.833787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.833817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.834070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.834087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.834221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.834239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.834356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.834374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.834466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.834484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.834581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.834599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.834833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.834851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.835010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.835027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.835191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.835208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.835370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.835389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.835571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.835588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.835872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.835901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.836192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.836221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.836360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.836390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.836517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.836547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.836749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.836779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.837038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.837067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.837276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.837306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.837443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.837473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.837777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.837806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.837954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.837983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.838242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.838294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.838433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.838462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.838653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.838683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.838887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.838916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.839055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.839084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.839200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.839230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.839373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.839407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.839635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.839674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.839861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.839891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.840175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.840205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.840504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.840535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.840733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.840762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.840891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.840910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.841146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.841175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.841298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.126 [2024-07-15 12:59:47.841328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.126 qpair failed and we were unable to recover it.
00:29:56.126 [2024-07-15 12:59:47.841554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.841583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.841781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.841810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.842041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.842070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.842274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.842304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.842590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.842619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.842878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.842907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.843081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.843098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.843391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.843422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.843605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.843635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.843823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.843852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.844125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.844155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.844438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.844468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.844751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.844780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.844911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.844940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.845212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.845240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.845532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.845561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.845760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.845788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.845993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.846023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.846213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.846242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.846531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.846561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.846817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.846846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.847041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.847070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.847201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.847218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.847476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.847506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.847717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.847746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.847947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.847977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.848178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.848196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.848360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.848379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.848533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.848573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.848848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.848877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.849074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.849114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.849223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.849241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.849480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.849502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.849655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.849673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.849860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.127 [2024-07-15 12:59:47.849877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.127 qpair failed and we were unable to recover it.
00:29:56.127 [2024-07-15 12:59:47.850100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.127 [2024-07-15 12:59:47.850117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.127 qpair failed and we were unable to recover it. 00:29:56.127 [2024-07-15 12:59:47.850372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.127 [2024-07-15 12:59:47.850390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.127 qpair failed and we were unable to recover it. 00:29:56.127 [2024-07-15 12:59:47.850511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.127 [2024-07-15 12:59:47.850528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.127 qpair failed and we were unable to recover it. 00:29:56.127 [2024-07-15 12:59:47.850706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.127 [2024-07-15 12:59:47.850725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.127 qpair failed and we were unable to recover it. 00:29:56.127 [2024-07-15 12:59:47.850964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.127 [2024-07-15 12:59:47.850981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.127 qpair failed and we were unable to recover it. 
00:29:56.127 [2024-07-15 12:59:47.851145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.127 [2024-07-15 12:59:47.851163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.127 qpair failed and we were unable to recover it. 00:29:56.127 [2024-07-15 12:59:47.851338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.127 [2024-07-15 12:59:47.851367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.127 qpair failed and we were unable to recover it. 00:29:56.127 [2024-07-15 12:59:47.851620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.127 [2024-07-15 12:59:47.851650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.127 qpair failed and we were unable to recover it. 00:29:56.127 [2024-07-15 12:59:47.851786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.127 [2024-07-15 12:59:47.851815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.127 qpair failed and we were unable to recover it. 00:29:56.127 [2024-07-15 12:59:47.852004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.127 [2024-07-15 12:59:47.852033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 
00:29:56.128 [2024-07-15 12:59:47.852234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.852273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.852476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.852506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.852788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.852818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.853029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.853058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.853269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.853288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 
00:29:56.128 [2024-07-15 12:59:47.853472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.853489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.853719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.853736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.853919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.853958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.854239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.854284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.854543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.854572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 
00:29:56.128 [2024-07-15 12:59:47.854697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.854727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.854931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.854960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.855218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.855247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.855490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.855507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.855674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.855692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 
00:29:56.128 [2024-07-15 12:59:47.855854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.855871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.856032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.856051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.856173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.856191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.856369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.856387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.856503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.856521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 
00:29:56.128 [2024-07-15 12:59:47.856756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.856773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.856959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.856976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.857164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.857180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.857342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.857361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.857554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.857583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 
00:29:56.128 [2024-07-15 12:59:47.857718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.857748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.857934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.857962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.858166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.858200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.858428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.858458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.858651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.858679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 
00:29:56.128 [2024-07-15 12:59:47.858947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.858976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.859231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.859266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.859450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.859468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.859647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.859665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.859829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.859846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 
00:29:56.128 [2024-07-15 12:59:47.860052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.860080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.860200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.860228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.860379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.860410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.860621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.860651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.860836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.860865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 
00:29:56.128 [2024-07-15 12:59:47.861091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.861119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.861318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.861348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.861552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.128 [2024-07-15 12:59:47.861582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.128 qpair failed and we were unable to recover it. 00:29:56.128 [2024-07-15 12:59:47.861728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.861759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 00:29:56.129 [2024-07-15 12:59:47.861982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.862000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 
00:29:56.129 [2024-07-15 12:59:47.862105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.862123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 00:29:56.129 [2024-07-15 12:59:47.862313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.862331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 00:29:56.129 [2024-07-15 12:59:47.862494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.862512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 00:29:56.129 [2024-07-15 12:59:47.862698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.862728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 00:29:56.129 [2024-07-15 12:59:47.862913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.862941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 
00:29:56.129 [2024-07-15 12:59:47.863250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.863288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 00:29:56.129 [2024-07-15 12:59:47.863490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.863519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 00:29:56.129 [2024-07-15 12:59:47.863657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.863686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 00:29:56.129 [2024-07-15 12:59:47.863889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.863917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 00:29:56.129 [2024-07-15 12:59:47.864209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.864237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 
00:29:56.129 [2024-07-15 12:59:47.864509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.864539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 00:29:56.129 [2024-07-15 12:59:47.864750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.864779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 00:29:56.129 [2024-07-15 12:59:47.865034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.865064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 00:29:56.129 [2024-07-15 12:59:47.865271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.865302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 00:29:56.129 [2024-07-15 12:59:47.865500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.865528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 
00:29:56.129 [2024-07-15 12:59:47.865659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.865687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 00:29:56.129 [2024-07-15 12:59:47.865818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.865847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 00:29:56.129 [2024-07-15 12:59:47.866105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.866135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 00:29:56.129 [2024-07-15 12:59:47.866355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.866386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 00:29:56.129 [2024-07-15 12:59:47.866567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.866596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 
00:29:56.129 [2024-07-15 12:59:47.866853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.866882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 00:29:56.129 [2024-07-15 12:59:47.867161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.867191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 00:29:56.129 [2024-07-15 12:59:47.867383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.867404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 00:29:56.129 [2024-07-15 12:59:47.867575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.867593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 00:29:56.129 [2024-07-15 12:59:47.867774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.129 [2024-07-15 12:59:47.867802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.129 qpair failed and we were unable to recover it. 
00:29:56.129 [... the same three-line sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every retry from 2024-07-15 12:59:47.868 through 12:59:47.891 ...]
00:29:56.132 [2024-07-15 12:59:47.891477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.891507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.891787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.891816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.892092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.892121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.892236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.892273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.892499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.892528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 
00:29:56.132 [2024-07-15 12:59:47.892781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.892809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.893094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.893124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.893320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.893343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.893454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.893471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.893656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.893673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 
00:29:56.132 [2024-07-15 12:59:47.893834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.893851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.894113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.894142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.894361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.894392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.894595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.894625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.894775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.894804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 
00:29:56.132 [2024-07-15 12:59:47.894939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.894956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.895131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.895171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.895461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.895492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.895619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.895650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.895906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.895935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 
00:29:56.132 [2024-07-15 12:59:47.896121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.896151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.896353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.896371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.896494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.896511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.896682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.896700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.896878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.896896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 
00:29:56.132 [2024-07-15 12:59:47.897096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.897114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.897277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.897296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.897481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.897511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.897629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.897658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.897889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.897919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 
00:29:56.132 [2024-07-15 12:59:47.898064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.898093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.898345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.898363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.898563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.898580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.898848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.898866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.899042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.899060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 
00:29:56.132 [2024-07-15 12:59:47.899169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.899187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.899381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.899399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.899503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.899520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.899692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.899710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.132 [2024-07-15 12:59:47.899882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.899899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 
00:29:56.132 [2024-07-15 12:59:47.900129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.132 [2024-07-15 12:59:47.900148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.132 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.900247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.900271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.900516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.900534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.900726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.900744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.900961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.900979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 
00:29:56.133 [2024-07-15 12:59:47.901184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.901212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.901506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.901537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.901797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.901832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.901978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.901995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.902086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.902104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 
00:29:56.133 [2024-07-15 12:59:47.902305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.902323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.902577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.902594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.902826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.902843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.902965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.902982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.903105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.903124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 
00:29:56.133 [2024-07-15 12:59:47.903294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.903313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.903415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.903432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.903611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.903630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.903871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.903889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.904085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.904102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 
00:29:56.133 [2024-07-15 12:59:47.904207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.904224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.904341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.904360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.904624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.904642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.904902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.904920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.905085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.905102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 
00:29:56.133 [2024-07-15 12:59:47.905274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.905292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.905555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.905583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.905786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.905815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.906002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.906032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.906239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.906278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 
00:29:56.133 [2024-07-15 12:59:47.906532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.906561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.906844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.906873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.907162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.907191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.907305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.907323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.907544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.907561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 
00:29:56.133 [2024-07-15 12:59:47.907757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.907775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.908034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.908052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.908212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.908229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.908423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.908454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.908598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.908628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 
00:29:56.133 [2024-07-15 12:59:47.908853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.908882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.909069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.909099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.133 [2024-07-15 12:59:47.909289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.133 [2024-07-15 12:59:47.909318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.133 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.909464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.909493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.909798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.909816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 
00:29:56.134 [2024-07-15 12:59:47.910066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.910084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.910265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.910284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.910491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.910520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.910727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.910756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.910955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.910984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 
00:29:56.134 [2024-07-15 12:59:47.911280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.911300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.911534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.911552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.911784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.911802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.911901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.911918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.912104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.912122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 
00:29:56.134 [2024-07-15 12:59:47.912296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.912315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.912575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.912604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.912835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.912865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.912997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.913027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.913241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.913278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 
00:29:56.134 [2024-07-15 12:59:47.913475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.913492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.913705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.913722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.913815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.913832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.913922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.913939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.914170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.914188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 
00:29:56.134 [2024-07-15 12:59:47.914352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.914371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.914493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.914511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.914676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.914693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.914864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.914882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.915062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.915080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 
00:29:56.134 [2024-07-15 12:59:47.915246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.915283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.915593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.915622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.915809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.915838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.916033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.916062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.916237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.916263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 
00:29:56.134 [2024-07-15 12:59:47.916510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.916540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.916741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.916771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.916969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.916998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.917245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.917269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.917475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.917493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 
00:29:56.134 [2024-07-15 12:59:47.917653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.917671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.917773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.917790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.918062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.918080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.918180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.918198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.918385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.918428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 
00:29:56.134 [2024-07-15 12:59:47.918578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.918606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.918861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.918891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.919014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.919042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.919176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.919207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.134 qpair failed and we were unable to recover it. 00:29:56.134 [2024-07-15 12:59:47.919409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.134 [2024-07-15 12:59:47.919428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 
00:29:56.135 [2024-07-15 12:59:47.919627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.919645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.919901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.919920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.920149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.920167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.920424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.920442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.920598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.920616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 
00:29:56.135 [2024-07-15 12:59:47.920844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.920874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.921074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.921103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.921315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.921345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.921535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.921553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.921662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.921679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 
00:29:56.135 [2024-07-15 12:59:47.921860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.921878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.921995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.922013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.922113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.922130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.922329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.922348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.922515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.922533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 
00:29:56.135 [2024-07-15 12:59:47.922707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.922724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.922904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.922921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.923153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.923171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.923436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.923454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.923623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.923640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 
00:29:56.135 [2024-07-15 12:59:47.923826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.923855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.923982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.924010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.924273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.924304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.924588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.924606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.924777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.924797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 
00:29:56.135 [2024-07-15 12:59:47.924975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.924993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.925094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.925112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.925349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.925367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.925609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.925627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.925889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.925907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 
00:29:56.135 [2024-07-15 12:59:47.926067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.926084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.926200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.926217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.926407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.926426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.926659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.926676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.926778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.926795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 
00:29:56.135 [2024-07-15 12:59:47.927027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.927045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.927235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.927253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.927445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.927474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.927699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.927728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.927958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.927987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 
00:29:56.135 [2024-07-15 12:59:47.928196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.928214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.928405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.928423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.928598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.928615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.928776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.928794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 00:29:56.135 [2024-07-15 12:59:47.928918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.135 [2024-07-15 12:59:47.928935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.135 qpair failed and we were unable to recover it. 
00:29:56.138 [2024-07-15 12:59:47.954206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.954235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.954529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.954559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.954745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.954774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.954987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.955016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.955159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.955176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 
00:29:56.138 [2024-07-15 12:59:47.955348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.955366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.955612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.955630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.955875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.955893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.956100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.956118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.956323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.956342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 
00:29:56.138 [2024-07-15 12:59:47.956502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.956540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.956763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.956793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.956977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.957006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.957194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.957211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.957395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.957426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 
00:29:56.138 [2024-07-15 12:59:47.957572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.957601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.957742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.957771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.957969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.957997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.958287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.958317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.958573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.958603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 
00:29:56.138 [2024-07-15 12:59:47.958833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.958862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.959052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.959081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.959277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.959295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.959546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.959564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.959754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.959771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 
00:29:56.138 [2024-07-15 12:59:47.959940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.959958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.960055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.960073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.960322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.960340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.960534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.960552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.960723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.960743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 
00:29:56.138 [2024-07-15 12:59:47.960907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.960925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.961124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.961152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.961385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.961415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.961555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.961572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.961758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.961800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 
00:29:56.138 [2024-07-15 12:59:47.962087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.962115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 12:59:47.962343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 12:59:47.962374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 12:59:47.962604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.962621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 12:59:47.962857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.962874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 12:59:47.963040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.963058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 
00:29:56.139 [2024-07-15 12:59:47.963277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.963295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 12:59:47.963468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.963507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 12:59:47.963702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.963730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 12:59:47.963936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.963965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 12:59:47.964246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.964283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 
00:29:56.139 [2024-07-15 12:59:47.964416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.964433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 12:59:47.964595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.964613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 12:59:47.964815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.964832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 12:59:47.965032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.965050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 12:59:47.965248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.965284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 
00:29:56.139 [2024-07-15 12:59:47.965399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.965427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 12:59:47.965557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.965586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 12:59:47.965734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.965762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 12:59:47.965957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.965985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 12:59:47.966222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.966251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 
00:29:56.139 [2024-07-15 12:59:47.966407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.966425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 12:59:47.966695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.966724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 12:59:47.966945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.966974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 12:59:47.967205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.967235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 12:59:47.967465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.967495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 
00:29:56.139 [2024-07-15 12:59:47.967625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.967642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 12:59:47.967866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 12:59:47.967884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.458 [2024-07-15 12:59:47.968046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.458 [2024-07-15 12:59:47.968065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.458 qpair failed and we were unable to recover it. 00:29:56.458 [2024-07-15 12:59:47.968243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.458 [2024-07-15 12:59:47.968268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.458 qpair failed and we were unable to recover it. 00:29:56.458 [2024-07-15 12:59:47.968559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.458 [2024-07-15 12:59:47.968589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.458 qpair failed and we were unable to recover it. 
00:29:56.458 [2024-07-15 12:59:47.968726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.458 [2024-07-15 12:59:47.968755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.458 qpair failed and we were unable to recover it. 00:29:56.458 [2024-07-15 12:59:47.968945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.458 [2024-07-15 12:59:47.968974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.458 qpair failed and we were unable to recover it. 00:29:56.458 [2024-07-15 12:59:47.969109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.458 [2024-07-15 12:59:47.969137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.458 qpair failed and we were unable to recover it. 00:29:56.458 [2024-07-15 12:59:47.969327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.458 [2024-07-15 12:59:47.969358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.458 qpair failed and we were unable to recover it. 00:29:56.458 [2024-07-15 12:59:47.969560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.458 [2024-07-15 12:59:47.969582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.458 qpair failed and we were unable to recover it. 
00:29:56.458 [2024-07-15 12:59:47.969744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.458 [2024-07-15 12:59:47.969762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.458 qpair failed and we were unable to recover it. 00:29:56.458 [2024-07-15 12:59:47.969862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.458 [2024-07-15 12:59:47.969879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.458 qpair failed and we were unable to recover it. 00:29:56.458 [2024-07-15 12:59:47.970086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.458 [2024-07-15 12:59:47.970104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.458 qpair failed and we were unable to recover it. 00:29:56.458 [2024-07-15 12:59:47.970223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.458 [2024-07-15 12:59:47.970240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.458 qpair failed and we were unable to recover it. 00:29:56.458 [2024-07-15 12:59:47.970462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.458 [2024-07-15 12:59:47.970480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.458 qpair failed and we were unable to recover it. 
00:29:56.458 [2024-07-15 12:59:47.970596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.458 [2024-07-15 12:59:47.970614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.458 qpair failed and we were unable to recover it.
00:29:56.461 [previous two messages (connect() failed, errno = 111 / sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420) repeated approximately 114 more times between 12:59:47.970847 and 12:59:47.994971; every attempt ended with "qpair failed and we were unable to recover it."]
00:29:56.461 [2024-07-15 12:59:47.995137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.461 [2024-07-15 12:59:47.995166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.461 qpair failed and we were unable to recover it. 00:29:56.461 [2024-07-15 12:59:47.995448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.461 [2024-07-15 12:59:47.995479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.461 qpair failed and we were unable to recover it. 00:29:56.461 [2024-07-15 12:59:47.995682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.461 [2024-07-15 12:59:47.995711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.461 qpair failed and we were unable to recover it. 00:29:56.461 [2024-07-15 12:59:47.995895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.461 [2024-07-15 12:59:47.995923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.461 qpair failed and we were unable to recover it. 00:29:56.461 [2024-07-15 12:59:47.996108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.461 [2024-07-15 12:59:47.996137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.461 qpair failed and we were unable to recover it. 
00:29:56.461 [2024-07-15 12:59:47.996274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.461 [2024-07-15 12:59:47.996303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.461 qpair failed and we were unable to recover it. 00:29:56.461 [2024-07-15 12:59:47.996456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.461 [2024-07-15 12:59:47.996474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.461 qpair failed and we were unable to recover it. 00:29:56.461 [2024-07-15 12:59:47.996646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.461 [2024-07-15 12:59:47.996663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.461 qpair failed and we were unable to recover it. 00:29:56.461 [2024-07-15 12:59:47.996834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:47.996852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:47.997029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:47.997047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 
00:29:56.462 [2024-07-15 12:59:47.997278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:47.997297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:47.997460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:47.997477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:47.997656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:47.997673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:47.997860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:47.997878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:47.998135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:47.998165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 
00:29:56.462 [2024-07-15 12:59:47.998348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:47.998379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:47.998513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:47.998543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:47.998830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:47.998848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:47.999011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:47.999029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:47.999283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:47.999302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 
00:29:56.462 [2024-07-15 12:59:47.999482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:47.999511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:47.999666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:47.999694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:47.999893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:47.999922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:48.000045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:48.000074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:48.000297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:48.000328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 
00:29:56.462 [2024-07-15 12:59:48.000619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:48.000648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:48.000923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:48.000993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:48.001223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:48.001280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:48.001401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:48.001433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:48.001563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:48.001583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 
00:29:56.462 [2024-07-15 12:59:48.001703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:48.001721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:48.001927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:48.001944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:48.002038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:48.002057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:48.002169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:48.002186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:48.002360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:48.002379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 
00:29:56.462 [2024-07-15 12:59:48.002565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:48.002583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:48.002749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:48.002767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:48.002857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:48.002874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:48.003117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:48.003135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:48.003378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:48.003400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 
00:29:56.462 [2024-07-15 12:59:48.003651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:48.003669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:48.003833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.462 [2024-07-15 12:59:48.003852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.462 qpair failed and we were unable to recover it. 00:29:56.462 [2024-07-15 12:59:48.003957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.003974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.004229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.004247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.004354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.004372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 
00:29:56.463 [2024-07-15 12:59:48.004556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.004574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.004805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.004823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.004997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.005014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.005195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.005213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.005446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.005477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 
00:29:56.463 [2024-07-15 12:59:48.005610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.005640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.005826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.005855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.006036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.006065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.006281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.006301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.006482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.006511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 
00:29:56.463 [2024-07-15 12:59:48.006707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.006737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.006937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.006967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.007178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.007208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.007341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.007370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.007559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.007588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 
00:29:56.463 [2024-07-15 12:59:48.007794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.007823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.008085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.008115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.008262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.008292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.008484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.008514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.008701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.008730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 
00:29:56.463 [2024-07-15 12:59:48.008943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.008973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.009227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.009272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.009456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.009474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.009587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.009604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.009858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.009877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 
00:29:56.463 [2024-07-15 12:59:48.010010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.010027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.010200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.010237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.010408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.010437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.010746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.010776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 00:29:56.463 [2024-07-15 12:59:48.010987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.463 [2024-07-15 12:59:48.011016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.463 qpair failed and we were unable to recover it. 
00:29:56.463 [2024-07-15 12:59:48.011208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.463 [2024-07-15 12:59:48.011238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.463 qpair failed and we were unable to recover it.
00:29:56.463-00:29:56.466 [the same three-line failure record repeats for every subsequent connection attempt, timestamps 12:59:48.011559 through 12:59:48.036914, all from posix.c:1038:posix_sock_create / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock with errno = 111 against tqpair=0x7f75d8000b90 at addr=10.0.0.2, port=4420]
00:29:56.466 [2024-07-15 12:59:48.037005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.466 [2024-07-15 12:59:48.037023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.466 qpair failed and we were unable to recover it. 00:29:56.466 [2024-07-15 12:59:48.037220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.466 [2024-07-15 12:59:48.037237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.466 qpair failed and we were unable to recover it. 00:29:56.466 [2024-07-15 12:59:48.037400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.466 [2024-07-15 12:59:48.037417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.466 qpair failed and we were unable to recover it. 00:29:56.466 [2024-07-15 12:59:48.037658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.466 [2024-07-15 12:59:48.037677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.466 qpair failed and we were unable to recover it. 00:29:56.466 [2024-07-15 12:59:48.037849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.466 [2024-07-15 12:59:48.037867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.466 qpair failed and we were unable to recover it. 
00:29:56.466 [2024-07-15 12:59:48.038050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.466 [2024-07-15 12:59:48.038068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.466 qpair failed and we were unable to recover it. 00:29:56.466 [2024-07-15 12:59:48.038249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.466 [2024-07-15 12:59:48.038279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.466 qpair failed and we were unable to recover it. 00:29:56.466 [2024-07-15 12:59:48.038448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.466 [2024-07-15 12:59:48.038465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.466 qpair failed and we were unable to recover it. 00:29:56.466 [2024-07-15 12:59:48.038645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.466 [2024-07-15 12:59:48.038675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.466 qpair failed and we were unable to recover it. 00:29:56.466 [2024-07-15 12:59:48.038801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.466 [2024-07-15 12:59:48.038830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.466 qpair failed and we were unable to recover it. 
00:29:56.466 [2024-07-15 12:59:48.039064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.466 [2024-07-15 12:59:48.039094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.466 qpair failed and we were unable to recover it. 00:29:56.466 [2024-07-15 12:59:48.039291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.466 [2024-07-15 12:59:48.039322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.466 qpair failed and we were unable to recover it. 00:29:56.466 [2024-07-15 12:59:48.039520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.466 [2024-07-15 12:59:48.039550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.466 qpair failed and we were unable to recover it. 00:29:56.466 [2024-07-15 12:59:48.039748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.466 [2024-07-15 12:59:48.039776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.466 qpair failed and we were unable to recover it. 00:29:56.466 [2024-07-15 12:59:48.040029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.466 [2024-07-15 12:59:48.040057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.466 qpair failed and we were unable to recover it. 
00:29:56.466 [2024-07-15 12:59:48.040326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.466 [2024-07-15 12:59:48.040356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.466 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.040477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.040494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.040703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.040721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.040880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.040898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.041083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.041118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 
00:29:56.467 [2024-07-15 12:59:48.041333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.041364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.041637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.041666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.041924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.041953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.042151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.042168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.042345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.042363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 
00:29:56.467 [2024-07-15 12:59:48.042581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.042598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.042833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.042868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.043124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.043153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.043415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.043445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.043648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.043677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 
00:29:56.467 [2024-07-15 12:59:48.043930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.043948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.044180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.044198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.044363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.044381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.044646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.044676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.044863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.044892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 
00:29:56.467 [2024-07-15 12:59:48.045097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.045127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.045327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.045367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.045510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.045541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.045681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.045698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.045862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.045880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 
00:29:56.467 [2024-07-15 12:59:48.046044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.046062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.046245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.046301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.046454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.046483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.046664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.046695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.046796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.046826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 
00:29:56.467 [2024-07-15 12:59:48.047109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.047139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.047448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.047479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.047627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.047657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.047805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.047822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.467 [2024-07-15 12:59:48.047993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.048010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 
00:29:56.467 [2024-07-15 12:59:48.048246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.467 [2024-07-15 12:59:48.048270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.467 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.048498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.048516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.048723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.048741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.048852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.048870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.049044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.049061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 
00:29:56.468 [2024-07-15 12:59:48.049258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.049276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.049470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.049513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.049647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.049676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.049934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.049963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.050175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.050204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 
00:29:56.468 [2024-07-15 12:59:48.050440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.050472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.050659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.050675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.050836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.050853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.050961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.050980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.051269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.051288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 
00:29:56.468 [2024-07-15 12:59:48.051526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.051544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.051653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.051670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.051850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.051868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.052051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.052068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.052313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.052332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 
00:29:56.468 [2024-07-15 12:59:48.052498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.052527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.052789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.052818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.053029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.053058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.053270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.053301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.053410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.053439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 
00:29:56.468 [2024-07-15 12:59:48.053635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.053653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.053758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.053779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.053870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.053887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.054060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.054078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 00:29:56.468 [2024-07-15 12:59:48.054259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.468 [2024-07-15 12:59:48.054277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.468 qpair failed and we were unable to recover it. 
00:29:56.471 [2024-07-15 12:59:48.079079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.471 [2024-07-15 12:59:48.079109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.471 qpair failed and we were unable to recover it. 00:29:56.471 [2024-07-15 12:59:48.079310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.471 [2024-07-15 12:59:48.079340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.471 qpair failed and we were unable to recover it. 00:29:56.471 [2024-07-15 12:59:48.079478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.471 [2024-07-15 12:59:48.079496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.471 qpair failed and we were unable to recover it. 00:29:56.471 [2024-07-15 12:59:48.079755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.471 [2024-07-15 12:59:48.079785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.471 qpair failed and we were unable to recover it. 00:29:56.471 [2024-07-15 12:59:48.080048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.471 [2024-07-15 12:59:48.080077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.471 qpair failed and we were unable to recover it. 
00:29:56.471 [2024-07-15 12:59:48.080375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.471 [2024-07-15 12:59:48.080405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.471 qpair failed and we were unable to recover it. 00:29:56.471 [2024-07-15 12:59:48.080538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.471 [2024-07-15 12:59:48.080572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.471 qpair failed and we were unable to recover it. 00:29:56.471 [2024-07-15 12:59:48.080858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.471 [2024-07-15 12:59:48.080887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.471 qpair failed and we were unable to recover it. 00:29:56.471 [2024-07-15 12:59:48.081074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.471 [2024-07-15 12:59:48.081104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.471 qpair failed and we were unable to recover it. 00:29:56.471 [2024-07-15 12:59:48.081335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.471 [2024-07-15 12:59:48.081365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.471 qpair failed and we were unable to recover it. 
00:29:56.471 [2024-07-15 12:59:48.081553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.471 [2024-07-15 12:59:48.081582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.471 qpair failed and we were unable to recover it. 00:29:56.471 [2024-07-15 12:59:48.081849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.471 [2024-07-15 12:59:48.081878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.471 qpair failed and we were unable to recover it. 00:29:56.471 [2024-07-15 12:59:48.082098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.082127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.082333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.082364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.082536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.082604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75c8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 
00:29:56.472 [2024-07-15 12:59:48.082764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.082797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75c8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.082938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.082969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75c8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.083160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.083191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.083406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.083437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.083714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.083743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 
00:29:56.472 [2024-07-15 12:59:48.083963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.083992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.084199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.084228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.084419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.084437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.084597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.084614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.084755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.084772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 
00:29:56.472 [2024-07-15 12:59:48.085002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.085020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.085202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.085219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.085479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.085498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.085670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.085688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.085863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.085892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 
00:29:56.472 [2024-07-15 12:59:48.086030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.086059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.086288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.086321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.086510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.086538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.086805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.086876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.087159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.087192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 
00:29:56.472 [2024-07-15 12:59:48.087473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.087506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.087776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.087806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.088047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.088077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.088211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.088240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.088550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.088570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 
00:29:56.472 [2024-07-15 12:59:48.088806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.088823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.088932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.088949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.089054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.089071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.089252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.089275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.089404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.089421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 
00:29:56.472 [2024-07-15 12:59:48.089537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.089555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.089730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.089749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.089933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.089951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.090104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.090133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.472 qpair failed and we were unable to recover it. 00:29:56.472 [2024-07-15 12:59:48.090427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.472 [2024-07-15 12:59:48.090458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 
00:29:56.473 [2024-07-15 12:59:48.090659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.090689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.090825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.090855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.090992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.091021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.091319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.091349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.091479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.091507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 
00:29:56.473 [2024-07-15 12:59:48.091800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.091818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.092053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.092070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.092178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.092195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.092360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.092378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.092579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.092597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 
00:29:56.473 [2024-07-15 12:59:48.092777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.092796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.093103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.093132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.093276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.093307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.093528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.093558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.093747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.093776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 
00:29:56.473 [2024-07-15 12:59:48.094000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.094030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.094293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.094324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.094538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.094568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.094824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.094854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.094985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.095015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 
00:29:56.473 [2024-07-15 12:59:48.095304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.095335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.095480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.095508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.095714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.095743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.095937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.095970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.096106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.096135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 
00:29:56.473 [2024-07-15 12:59:48.096342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.096388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.096598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.096615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.096805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.096834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.097039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.097068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 00:29:56.473 [2024-07-15 12:59:48.097293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.473 [2024-07-15 12:59:48.097322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.473 qpair failed and we were unable to recover it. 
00:29:56.476 [... identical posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it" triple repeated for each subsequent reconnect attempt, [2024-07-15 12:59:48.097644] through [2024-07-15 12:59:48.121382] ...]
00:29:56.476 [2024-07-15 12:59:48.121615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.476 [2024-07-15 12:59:48.121633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.476 qpair failed and we were unable to recover it. 00:29:56.476 [2024-07-15 12:59:48.121730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.476 [2024-07-15 12:59:48.121748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.476 qpair failed and we were unable to recover it. 00:29:56.476 [2024-07-15 12:59:48.121984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.476 [2024-07-15 12:59:48.122015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.476 qpair failed and we were unable to recover it. 00:29:56.476 [2024-07-15 12:59:48.122167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.476 [2024-07-15 12:59:48.122202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.476 qpair failed and we were unable to recover it. 00:29:56.476 [2024-07-15 12:59:48.122417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.476 [2024-07-15 12:59:48.122447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.476 qpair failed and we were unable to recover it. 
00:29:56.476 [2024-07-15 12:59:48.122636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.476 [2024-07-15 12:59:48.122654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.476 qpair failed and we were unable to recover it. 00:29:56.476 [2024-07-15 12:59:48.122936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.476 [2024-07-15 12:59:48.122953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.476 qpair failed and we were unable to recover it. 00:29:56.476 [2024-07-15 12:59:48.123143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.476 [2024-07-15 12:59:48.123161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.476 qpair failed and we were unable to recover it. 00:29:56.476 [2024-07-15 12:59:48.123342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.476 [2024-07-15 12:59:48.123361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.476 qpair failed and we were unable to recover it. 00:29:56.476 [2024-07-15 12:59:48.123562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.476 [2024-07-15 12:59:48.123580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.476 qpair failed and we were unable to recover it. 
00:29:56.476 [2024-07-15 12:59:48.123746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.476 [2024-07-15 12:59:48.123763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.123930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.123948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.124133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.124163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.124384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.124415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.124553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.124582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 
00:29:56.477 [2024-07-15 12:59:48.124859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.124877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.125124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.125142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.125308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.125326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.125433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.125451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.125564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.125582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 
00:29:56.477 [2024-07-15 12:59:48.125758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.125776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.125967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.125997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.126197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.126226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.126426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.126457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.126641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.126659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 
00:29:56.477 [2024-07-15 12:59:48.126771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.126788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.126964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.126982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.127162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.127191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.127340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.127371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.127529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.127558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 
00:29:56.477 [2024-07-15 12:59:48.127750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.127780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.128018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.128058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.128165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.128183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.128414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.128433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.128522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.128538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 
00:29:56.477 [2024-07-15 12:59:48.128661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.128679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.128839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.128856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.129058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.129076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.129309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.129327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.129521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.129538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 
00:29:56.477 [2024-07-15 12:59:48.129728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.129746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.129927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.129944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.130115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.130143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.130349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.130385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.130712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.130741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 
00:29:56.477 [2024-07-15 12:59:48.130927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.130944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.131126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.131143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.131379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.131410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.131550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.477 [2024-07-15 12:59:48.131580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.477 qpair failed and we were unable to recover it. 00:29:56.477 [2024-07-15 12:59:48.131785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.131814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 
00:29:56.478 [2024-07-15 12:59:48.132067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.132084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-07-15 12:59:48.132344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.132362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-07-15 12:59:48.132654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.132672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-07-15 12:59:48.132772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.132790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-07-15 12:59:48.133048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.133066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 
00:29:56.478 [2024-07-15 12:59:48.133252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.133275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-07-15 12:59:48.133390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.133408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-07-15 12:59:48.133672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.133690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-07-15 12:59:48.133800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.133818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-07-15 12:59:48.134051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.134069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 
00:29:56.478 [2024-07-15 12:59:48.134238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.134260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-07-15 12:59:48.134376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.134394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-07-15 12:59:48.134591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.134608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-07-15 12:59:48.134786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.134803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-07-15 12:59:48.134974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.135004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 
00:29:56.478 [2024-07-15 12:59:48.135124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.135154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-07-15 12:59:48.135302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.135333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-07-15 12:59:48.135459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.135489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-07-15 12:59:48.135609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.135638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-07-15 12:59:48.135786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.135815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 
00:29:56.478 [2024-07-15 12:59:48.136145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.136163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-07-15 12:59:48.136264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.136282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-07-15 12:59:48.136456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.136474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-07-15 12:59:48.136583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.136601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 00:29:56.478 [2024-07-15 12:59:48.136708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.478 [2024-07-15 12:59:48.136726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.478 qpair failed and we were unable to recover it. 
00:29:56.478 [2024-07-15 12:59:48.136821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.478 [2024-07-15 12:59:48.136841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.478 qpair failed and we were unable to recover it.
00:29:56.481 [... the same three-line failure sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats 114 more times between 12:59:48.137002 and 12:59:48.160012 ...]
00:29:56.481 [2024-07-15 12:59:48.160211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.481 [2024-07-15 12:59:48.160229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.481 qpair failed and we were unable to recover it. 00:29:56.481 [2024-07-15 12:59:48.160424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.481 [2024-07-15 12:59:48.160454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.481 qpair failed and we were unable to recover it. 00:29:56.481 [2024-07-15 12:59:48.160718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.481 [2024-07-15 12:59:48.160746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.481 qpair failed and we were unable to recover it. 00:29:56.481 [2024-07-15 12:59:48.161028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.481 [2024-07-15 12:59:48.161058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.481 qpair failed and we were unable to recover it. 00:29:56.481 [2024-07-15 12:59:48.161199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.481 [2024-07-15 12:59:48.161228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.481 qpair failed and we were unable to recover it. 
00:29:56.481 [2024-07-15 12:59:48.161453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.481 [2024-07-15 12:59:48.161483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.481 qpair failed and we were unable to recover it. 00:29:56.481 [2024-07-15 12:59:48.161738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.481 [2024-07-15 12:59:48.161767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.481 qpair failed and we were unable to recover it. 00:29:56.481 [2024-07-15 12:59:48.161903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.481 [2024-07-15 12:59:48.161932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.481 qpair failed and we were unable to recover it. 00:29:56.481 [2024-07-15 12:59:48.162155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.481 [2024-07-15 12:59:48.162185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.481 qpair failed and we were unable to recover it. 00:29:56.481 [2024-07-15 12:59:48.162386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.481 [2024-07-15 12:59:48.162417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.481 qpair failed and we were unable to recover it. 
00:29:56.481 [2024-07-15 12:59:48.162634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.481 [2024-07-15 12:59:48.162663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.481 qpair failed and we were unable to recover it. 00:29:56.481 [2024-07-15 12:59:48.162946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.481 [2024-07-15 12:59:48.162975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.481 qpair failed and we were unable to recover it. 00:29:56.481 [2024-07-15 12:59:48.163161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.481 [2024-07-15 12:59:48.163179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.481 qpair failed and we were unable to recover it. 00:29:56.481 [2024-07-15 12:59:48.163293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.481 [2024-07-15 12:59:48.163314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.481 qpair failed and we were unable to recover it. 00:29:56.481 [2024-07-15 12:59:48.163411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.481 [2024-07-15 12:59:48.163429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.481 qpair failed and we were unable to recover it. 
00:29:56.481 [2024-07-15 12:59:48.163536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.481 [2024-07-15 12:59:48.163554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.481 qpair failed and we were unable to recover it. 00:29:56.481 [2024-07-15 12:59:48.163715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.481 [2024-07-15 12:59:48.163733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.481 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.163896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.163914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.164160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.164178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.164278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.164295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 
00:29:56.482 [2024-07-15 12:59:48.164552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.164570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.164683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.164701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.164861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.164878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.165054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.165071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.165209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.165227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 
00:29:56.482 [2024-07-15 12:59:48.165350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.165369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.165484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.165501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.165734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.165752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.165851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.165869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.166040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.166057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 
00:29:56.482 [2024-07-15 12:59:48.166169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.166186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.166419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.166438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.166616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.166633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.166806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.166824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.166938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.166955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 
00:29:56.482 [2024-07-15 12:59:48.167060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.167078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.167172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.167190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.167380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.167399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.167575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.167594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.167759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.167777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 
00:29:56.482 [2024-07-15 12:59:48.167973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.167990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.168099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.168117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.168399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.168418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.168686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.168715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.168856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.168885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 
00:29:56.482 [2024-07-15 12:59:48.169012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.169041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.169202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.169220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.169386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.169405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.169633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.169650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.169752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.169770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 
00:29:56.482 [2024-07-15 12:59:48.169937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.169955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.170074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.170091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.170271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.170289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.170495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.170516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.170689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.170718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 
00:29:56.482 [2024-07-15 12:59:48.170907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.170936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.482 qpair failed and we were unable to recover it. 00:29:56.482 [2024-07-15 12:59:48.171153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.482 [2024-07-15 12:59:48.171182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-07-15 12:59:48.171320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-07-15 12:59:48.171350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-07-15 12:59:48.171564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-07-15 12:59:48.171593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-07-15 12:59:48.171852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-07-15 12:59:48.171869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 
00:29:56.483 [2024-07-15 12:59:48.171983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-07-15 12:59:48.172000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-07-15 12:59:48.172157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-07-15 12:59:48.172174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-07-15 12:59:48.172376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-07-15 12:59:48.172406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-07-15 12:59:48.172598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-07-15 12:59:48.172627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-07-15 12:59:48.172963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-07-15 12:59:48.172992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 
00:29:56.483 [2024-07-15 12:59:48.173274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-07-15 12:59:48.173304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-07-15 12:59:48.173494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-07-15 12:59:48.173524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-07-15 12:59:48.173672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-07-15 12:59:48.173702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-07-15 12:59:48.173889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-07-15 12:59:48.173922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-07-15 12:59:48.174203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-07-15 12:59:48.174221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 
00:29:56.483 [2024-07-15 12:59:48.174421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-07-15 12:59:48.174440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-07-15 12:59:48.174631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-07-15 12:59:48.174648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-07-15 12:59:48.174828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-07-15 12:59:48.174857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-07-15 12:59:48.175082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-07-15 12:59:48.175111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 00:29:56.483 [2024-07-15 12:59:48.175422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.483 [2024-07-15 12:59:48.175453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.483 qpair failed and we were unable to recover it. 
00:29:56.483 [2024-07-15 12:59:48.175591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.483 [2024-07-15 12:59:48.175621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.483 qpair failed and we were unable to recover it.
[the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 → nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it.) repeats for every subsequent retry, timestamps 12:59:48.175754 through 12:59:48.198110]
00:29:56.486 [2024-07-15 12:59:48.198294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.198313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 00:29:56.486 [2024-07-15 12:59:48.198409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.198427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 00:29:56.486 [2024-07-15 12:59:48.198601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.198618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 00:29:56.486 [2024-07-15 12:59:48.198709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.198725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 00:29:56.486 [2024-07-15 12:59:48.198828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.198846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 
00:29:56.486 [2024-07-15 12:59:48.198940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.198960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 00:29:56.486 [2024-07-15 12:59:48.199069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.199086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 00:29:56.486 [2024-07-15 12:59:48.199184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.199201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 00:29:56.486 [2024-07-15 12:59:48.199392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.199411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 00:29:56.486 [2024-07-15 12:59:48.199574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.199591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 
00:29:56.486 [2024-07-15 12:59:48.199751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.199772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 00:29:56.486 [2024-07-15 12:59:48.199942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.199960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 00:29:56.486 [2024-07-15 12:59:48.200146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.200163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 00:29:56.486 [2024-07-15 12:59:48.200367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.200394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 00:29:56.486 [2024-07-15 12:59:48.200562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.200581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 
00:29:56.486 [2024-07-15 12:59:48.200769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.200787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 00:29:56.486 [2024-07-15 12:59:48.200985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.201024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 00:29:56.486 [2024-07-15 12:59:48.201193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.201223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 00:29:56.486 [2024-07-15 12:59:48.201424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.201455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 00:29:56.486 [2024-07-15 12:59:48.201735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.201764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 
00:29:56.486 [2024-07-15 12:59:48.201975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.202005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 00:29:56.486 [2024-07-15 12:59:48.202267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.202298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 00:29:56.486 [2024-07-15 12:59:48.202417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.202436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 00:29:56.486 [2024-07-15 12:59:48.202529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.202546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 00:29:56.486 [2024-07-15 12:59:48.202729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.202747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 
00:29:56.486 [2024-07-15 12:59:48.203002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.486 [2024-07-15 12:59:48.203020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.486 qpair failed and we were unable to recover it. 00:29:56.486 [2024-07-15 12:59:48.203251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.203274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.203376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.203394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.203569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.203587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.203694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.203712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 
00:29:56.487 [2024-07-15 12:59:48.203872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.203891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.204061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.204079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.204176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.204193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.204360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.204379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.204553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.204582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 
00:29:56.487 [2024-07-15 12:59:48.204720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.204750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.204937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.204966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.205157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.205187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.205445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.205476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.205698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.205727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 
00:29:56.487 [2024-07-15 12:59:48.205854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.205883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.206097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.206127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.206387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.206405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.206524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.206542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.206707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.206725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 
00:29:56.487 [2024-07-15 12:59:48.206916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.206934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.207194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.207212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.207446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.207464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.207578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.207595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.207719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.207737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 
00:29:56.487 [2024-07-15 12:59:48.207911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.207948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.208138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.208168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.208296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.208327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.208518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.208547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.208659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.208688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 
00:29:56.487 [2024-07-15 12:59:48.208905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.208934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.209156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.209173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.209384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.209403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.209577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.209613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.209742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.209771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 
00:29:56.487 [2024-07-15 12:59:48.209975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.210005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.210127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.210157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.210351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.210370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.210474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.210492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.210666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.210684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 
00:29:56.487 [2024-07-15 12:59:48.210946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.210964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.211061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.211079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.211245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.211267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.211361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.211378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 00:29:56.487 [2024-07-15 12:59:48.211584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.487 [2024-07-15 12:59:48.211602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.487 qpair failed and we were unable to recover it. 
00:29:56.487 [2024-07-15 12:59:48.211764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-07-15 12:59:48.211781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-07-15 12:59:48.211985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-07-15 12:59:48.212015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-07-15 12:59:48.212221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-07-15 12:59:48.212251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-07-15 12:59:48.212398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-07-15 12:59:48.212427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-07-15 12:59:48.212640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-07-15 12:59:48.212669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 
00:29:56.488 [2024-07-15 12:59:48.212889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-07-15 12:59:48.212919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-07-15 12:59:48.213115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-07-15 12:59:48.213145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-07-15 12:59:48.213284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-07-15 12:59:48.213303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-07-15 12:59:48.213502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-07-15 12:59:48.213520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 00:29:56.488 [2024-07-15 12:59:48.213679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.488 [2024-07-15 12:59:48.213697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.488 qpair failed and we were unable to recover it. 
00:29:56.490 [2024-07-15 12:59:48.235089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.490 [2024-07-15 12:59:48.235107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.490 qpair failed and we were unable to recover it. 00:29:56.490 [2024-07-15 12:59:48.235385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.490 [2024-07-15 12:59:48.235404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.490 qpair failed and we were unable to recover it. 00:29:56.490 [2024-07-15 12:59:48.235511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.235529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.235755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.235773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.236002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.236020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 
00:29:56.491 [2024-07-15 12:59:48.236310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.236328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.236482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.236500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.236603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.236621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.236747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.236765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.236941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.236958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 
00:29:56.491 [2024-07-15 12:59:48.237061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.237079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.237314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.237333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.237451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.237469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.237720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.237739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.237840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.237857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 
00:29:56.491 [2024-07-15 12:59:48.237964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.237981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.238150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.238167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.238277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.238296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.238397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.238415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.238594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.238624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 
00:29:56.491 [2024-07-15 12:59:48.238758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.238793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.238933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.238962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.239151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.239186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.239351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.239369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.239480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.239497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 
00:29:56.491 [2024-07-15 12:59:48.239688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.239706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.239936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.239965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.240195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.240224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.240422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.240452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.240601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.240630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 
00:29:56.491 [2024-07-15 12:59:48.240847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.240877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.241055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.241095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.241276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.241295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.241527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.241544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.241656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.241674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 
00:29:56.491 [2024-07-15 12:59:48.241933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.241951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.242139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.242157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.242248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.242272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.242475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.242494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.242657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.242674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 
00:29:56.491 [2024-07-15 12:59:48.242952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.242982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.243186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.243215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.243437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.243468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.243666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.243696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.243969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.243987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 
00:29:56.491 [2024-07-15 12:59:48.244141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.244159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.244323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.244342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.244450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.244468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.244678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.244696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 00:29:56.491 [2024-07-15 12:59:48.244901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.491 [2024-07-15 12:59:48.244930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.491 qpair failed and we were unable to recover it. 
00:29:56.492 [2024-07-15 12:59:48.245130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.245159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.245272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.245302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.245492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.245521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.245714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.245743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.245875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.245904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 
00:29:56.492 [2024-07-15 12:59:48.246038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.246055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.246223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.246241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.246420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.246438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.246630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.246648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.246826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.246844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 
00:29:56.492 [2024-07-15 12:59:48.247008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.247029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.247266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.247284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.247530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.247548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.247722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.247739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.247847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.247865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 
00:29:56.492 [2024-07-15 12:59:48.248093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.248111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.248222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.248240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.248407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.248426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.248668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.248696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.248823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.248852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 
00:29:56.492 [2024-07-15 12:59:48.249109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.249138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.249322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.249341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.249614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.249632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.249797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.249815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.249933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.249951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 
00:29:56.492 [2024-07-15 12:59:48.250125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.250143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.250245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.250267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.250472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.250490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.250668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.250686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 00:29:56.492 [2024-07-15 12:59:48.250793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.492 [2024-07-15 12:59:48.250811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.492 qpair failed and we were unable to recover it. 
00:29:56.495 [2024-07-15 12:59:48.272638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.272656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.495 [2024-07-15 12:59:48.272777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.272795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.495 [2024-07-15 12:59:48.272902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.272920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.495 [2024-07-15 12:59:48.273151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.273169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.495 [2024-07-15 12:59:48.273408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.273439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 
00:29:56.495 [2024-07-15 12:59:48.273564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.273594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.495 [2024-07-15 12:59:48.273736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.273765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.495 [2024-07-15 12:59:48.274033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.274063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.495 [2024-07-15 12:59:48.274238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.274261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.495 [2024-07-15 12:59:48.274527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.274545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 
00:29:56.495 [2024-07-15 12:59:48.274719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.274748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.495 [2024-07-15 12:59:48.274945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.274974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.495 [2024-07-15 12:59:48.275094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.275124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.495 [2024-07-15 12:59:48.275238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.275277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.495 [2024-07-15 12:59:48.275424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.275453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 
00:29:56.495 [2024-07-15 12:59:48.275713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.275743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.495 [2024-07-15 12:59:48.275859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.275888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.495 [2024-07-15 12:59:48.276016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.276046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.495 [2024-07-15 12:59:48.276238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.276277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.495 [2024-07-15 12:59:48.276573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.276601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 
00:29:56.495 [2024-07-15 12:59:48.276793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.276822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.495 [2024-07-15 12:59:48.276951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.276980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.495 [2024-07-15 12:59:48.277269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.277288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.495 [2024-07-15 12:59:48.277492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.277510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.495 [2024-07-15 12:59:48.277698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.277715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 
00:29:56.495 [2024-07-15 12:59:48.277879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.277897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.495 [2024-07-15 12:59:48.278018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-15 12:59:48.278036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.495 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.278137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.278156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.278273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.278291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.278546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.278564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 
00:29:56.496 [2024-07-15 12:59:48.278671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.278692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.278883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.278901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.279076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.279094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.279213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.279231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.279363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.279381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 
00:29:56.496 [2024-07-15 12:59:48.279548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.279565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.279752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.279781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.280042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.280071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.280278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.280309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.280494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.280512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 
00:29:56.496 [2024-07-15 12:59:48.280614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.280632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.280752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.280770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.280875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.280893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.281006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.281023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.281289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.281308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 
00:29:56.496 [2024-07-15 12:59:48.281483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.281501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.281661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.281679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.281853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.281871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.281982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.282000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.282172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.282190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 
00:29:56.496 [2024-07-15 12:59:48.282368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.282399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.282594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.282623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.282826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.282855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.283069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.283087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.283290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.283320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 
00:29:56.496 [2024-07-15 12:59:48.283525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.283554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.283746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.283776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.284024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.284055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.284249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.284287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.284482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.284511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 
00:29:56.496 [2024-07-15 12:59:48.284711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.284741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.285029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.285058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.285349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.285368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.496 qpair failed and we were unable to recover it. 00:29:56.496 [2024-07-15 12:59:48.285591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-15 12:59:48.285609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.497 qpair failed and we were unable to recover it. 00:29:56.497 [2024-07-15 12:59:48.285769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-15 12:59:48.285787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.497 qpair failed and we were unable to recover it. 
00:29:56.497 [2024-07-15 12:59:48.285973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-15 12:59:48.286002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.497 qpair failed and we were unable to recover it. 00:29:56.497 [2024-07-15 12:59:48.286131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-15 12:59:48.286160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.497 qpair failed and we were unable to recover it. 00:29:56.497 [2024-07-15 12:59:48.286372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-15 12:59:48.286402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.497 qpair failed and we were unable to recover it. 00:29:56.497 [2024-07-15 12:59:48.286688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-15 12:59:48.286718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.497 qpair failed and we were unable to recover it. 00:29:56.497 [2024-07-15 12:59:48.286927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-15 12:59:48.286956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.497 qpair failed and we were unable to recover it. 
00:29:56.497 [2024-07-15 12:59:48.287162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-15 12:59:48.287182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.497 qpair failed and we were unable to recover it. 00:29:56.497 [2024-07-15 12:59:48.287374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-15 12:59:48.287392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.497 qpair failed and we were unable to recover it. 00:29:56.497 [2024-07-15 12:59:48.287579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-15 12:59:48.287609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.497 qpair failed and we were unable to recover it. 00:29:56.497 [2024-07-15 12:59:48.287737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-15 12:59:48.287766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.497 qpair failed and we were unable to recover it. 00:29:56.497 [2024-07-15 12:59:48.288053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-15 12:59:48.288082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.497 qpair failed and we were unable to recover it. 
00:29:56.497 [2024-07-15 12:59:48.288362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-15 12:59:48.288394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.497 qpair failed and we were unable to recover it.
00:29:56.499 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it" error triple repeats approximately 114 more times between 2024-07-15 12:59:48.288 and 12:59:48.312, always with connect() errno = 111 and addr=10.0.0.2, port=4420; the failing tqpair is 0x7f75d8000b90 throughout except for three occurrences on tqpair=0x7f75d0000b90 ...]
00:29:56.499 [2024-07-15 12:59:48.312291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.312310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.312568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.312587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.312841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.312859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.313014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.313032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.313238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.313275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 
00:29:56.499 [2024-07-15 12:59:48.313417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.313447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.313579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.313608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.313741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.313770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.314041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.314070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.314268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.314286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 
00:29:56.499 [2024-07-15 12:59:48.314393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.314411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.314672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.314689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.314893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.314910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.315002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.315019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.315152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.315170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 
00:29:56.499 [2024-07-15 12:59:48.315294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.315313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.315414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.315432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.315540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.315558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.315676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.315694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.315875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.315904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 
00:29:56.499 [2024-07-15 12:59:48.316159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.316189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.316412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.316442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.316673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.316702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.316899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.316929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.317123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.317140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 
00:29:56.499 [2024-07-15 12:59:48.317265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.317284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.317540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.317558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.317720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.317740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.317908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.317926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.318100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.318117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 
00:29:56.499 [2024-07-15 12:59:48.318245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.318267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.318364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.318381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.318547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.318565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.318797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.318815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 00:29:56.499 [2024-07-15 12:59:48.319022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.499 [2024-07-15 12:59:48.319051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.499 qpair failed and we were unable to recover it. 
00:29:56.500 [2024-07-15 12:59:48.319267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.319297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.319432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.319461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.319724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.319753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.319944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.319973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.320181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.320210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 
00:29:56.500 [2024-07-15 12:59:48.320349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.320368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.320601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.320619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.320796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.320813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.320929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.320947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.321043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.321060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 
00:29:56.500 [2024-07-15 12:59:48.321223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.321240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.321436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.321455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.321618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.321636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.321870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.321898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.322101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.322129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 
00:29:56.500 [2024-07-15 12:59:48.322358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.322392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.322602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.322620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.322727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.322745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.322850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.322868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.323074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.323104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 
00:29:56.500 [2024-07-15 12:59:48.323281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.323312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.323498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.323527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.323662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.323692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.323963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.323992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.324178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.324207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 
00:29:56.500 [2024-07-15 12:59:48.324396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.324414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.324519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.324537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.324796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.324814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.324925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.324943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.325115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.325133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 
00:29:56.500 [2024-07-15 12:59:48.325357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.325387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.325655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.500 [2024-07-15 12:59:48.325685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.500 qpair failed and we were unable to recover it. 00:29:56.500 [2024-07-15 12:59:48.325891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.501 [2024-07-15 12:59:48.325926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.501 qpair failed and we were unable to recover it. 00:29:56.501 [2024-07-15 12:59:48.326153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.501 [2024-07-15 12:59:48.326171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.501 qpair failed and we were unable to recover it. 00:29:56.501 [2024-07-15 12:59:48.326331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.501 [2024-07-15 12:59:48.326350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.501 qpair failed and we were unable to recover it. 
00:29:56.501 [2024-07-15 12:59:48.326541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.501 [2024-07-15 12:59:48.326569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.501 qpair failed and we were unable to recover it. 00:29:56.501 [2024-07-15 12:59:48.326770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.501 [2024-07-15 12:59:48.326799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.501 qpair failed and we were unable to recover it. 00:29:56.501 [2024-07-15 12:59:48.327003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.501 [2024-07-15 12:59:48.327032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.501 qpair failed and we were unable to recover it. 00:29:56.501 [2024-07-15 12:59:48.327236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.501 [2024-07-15 12:59:48.327285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.501 qpair failed and we were unable to recover it. 00:29:56.501 [2024-07-15 12:59:48.327542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.501 [2024-07-15 12:59:48.327571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.501 qpair failed and we were unable to recover it. 
00:29:56.501 [2024-07-15 12:59:48.327777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.501 [2024-07-15 12:59:48.327807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.501 qpair failed and we were unable to recover it. 00:29:56.501 [2024-07-15 12:59:48.327989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.501 [2024-07-15 12:59:48.328018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.501 qpair failed and we were unable to recover it. 00:29:56.501 [2024-07-15 12:59:48.328205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.501 [2024-07-15 12:59:48.328234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.501 qpair failed and we were unable to recover it. 00:29:56.501 [2024-07-15 12:59:48.328507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.501 [2024-07-15 12:59:48.328525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.501 qpair failed and we were unable to recover it. 00:29:56.501 [2024-07-15 12:59:48.328617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.501 [2024-07-15 12:59:48.328636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.501 qpair failed and we were unable to recover it. 
00:29:56.503 [2024-07-15 12:59:48.353620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.353638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.353813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.353831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.353996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.354014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.354259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.354278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.354451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.354469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 
00:29:56.503 [2024-07-15 12:59:48.354581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.354598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.354860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.354878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.355147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.355164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.355327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.355345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.355465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.355483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 
00:29:56.503 [2024-07-15 12:59:48.355655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.355673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.355907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.355925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.356096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.356114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.356240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.356264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.356436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.356454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 
00:29:56.503 [2024-07-15 12:59:48.356621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.356639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.356893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.356911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.357116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.357145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.357323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.357354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.357544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.357573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 
00:29:56.503 [2024-07-15 12:59:48.357720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.357748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.358036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.358063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.358308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.358327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.358509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.358527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.358643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.358660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 
00:29:56.503 [2024-07-15 12:59:48.358906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.358923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.359016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.359033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.359141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.359159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.359272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.359290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.359412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.359430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 
00:29:56.503 [2024-07-15 12:59:48.359539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.359557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.359744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.359762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.359993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.360010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.360189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.360206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.360436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.360455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 
00:29:56.503 [2024-07-15 12:59:48.360575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.360592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.360765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.360782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.360912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.360933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.361108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.361125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.503 [2024-07-15 12:59:48.361300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.361319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 
00:29:56.503 [2024-07-15 12:59:48.361441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.503 [2024-07-15 12:59:48.361458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.503 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.361569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.361587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.361750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.361768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.361941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.361958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.362142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.362159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 
00:29:56.504 [2024-07-15 12:59:48.362377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.362396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.362491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.362508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.362620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.362638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.362743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.362761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.362851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.362868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 
00:29:56.504 [2024-07-15 12:59:48.362985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.363003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.363123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.363141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.363305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.363324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.363435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.363453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.363572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.363589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 
00:29:56.504 [2024-07-15 12:59:48.363784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.363803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.363904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.363922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.364024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.364042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.364221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.364239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.364552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.364569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 
00:29:56.504 [2024-07-15 12:59:48.364667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.364684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.364793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.364811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.365093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.365122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.365245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.365285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.365448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.365479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 
00:29:56.504 [2024-07-15 12:59:48.365756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.365785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.365982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.366012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.366289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.366308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.366552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.366570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.366768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.366798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 
00:29:56.504 [2024-07-15 12:59:48.366986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.367015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.367273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.367304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.367439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.367457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.367582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.367599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 00:29:56.504 [2024-07-15 12:59:48.367762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.367780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 
00:29:56.504 [2024-07-15 12:59:48.367944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.504 [2024-07-15 12:59:48.367962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.504 qpair failed and we were unable to recover it. 
[... identical connect() retries (errno = 111, ECONNREFUSED) against tqpair=0x7f75d8000b90 at 10.0.0.2:4420 elided; the same three-message sequence repeats continuously from 12:59:48.367944 through 12:59:48.387979, every attempt ending in "qpair failed and we were unable to recover it." ...]
00:29:56.793 [2024-07-15 12:59:48.387979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.793 [2024-07-15 12:59:48.387997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.793 qpair failed and we were unable to recover it. 
00:29:56.793 [2024-07-15 12:59:48.388094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.793 [2024-07-15 12:59:48.388112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.793 qpair failed and we were unable to recover it. 00:29:56.793 [2024-07-15 12:59:48.388280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.793 [2024-07-15 12:59:48.388298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.793 qpair failed and we were unable to recover it. 00:29:56.793 [2024-07-15 12:59:48.388414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.793 [2024-07-15 12:59:48.388432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.793 qpair failed and we were unable to recover it. 00:29:56.793 [2024-07-15 12:59:48.388672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.793 [2024-07-15 12:59:48.388690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.793 qpair failed and we were unable to recover it. 00:29:56.793 [2024-07-15 12:59:48.388794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.793 [2024-07-15 12:59:48.388812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.793 qpair failed and we were unable to recover it. 
00:29:56.793 [2024-07-15 12:59:48.388908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.793 [2024-07-15 12:59:48.388925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.793 qpair failed and we were unable to recover it. 00:29:56.793 [2024-07-15 12:59:48.389040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.793 [2024-07-15 12:59:48.389058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.793 qpair failed and we were unable to recover it. 00:29:56.793 [2024-07-15 12:59:48.389154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.793 [2024-07-15 12:59:48.389175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.793 qpair failed and we were unable to recover it. 00:29:56.793 [2024-07-15 12:59:48.389338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.793 [2024-07-15 12:59:48.389358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.793 qpair failed and we were unable to recover it. 00:29:56.793 [2024-07-15 12:59:48.389543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.793 [2024-07-15 12:59:48.389579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.793 qpair failed and we were unable to recover it. 
00:29:56.794 [2024-07-15 12:59:48.389716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.389745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.390003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.390032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.390214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.390232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.390352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.390370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.390462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.390479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 
00:29:56.794 [2024-07-15 12:59:48.390593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.390611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.390867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.390885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.390984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.391002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.391094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.391110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.391284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.391303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 
00:29:56.794 [2024-07-15 12:59:48.391464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.391481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.391657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.391675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.391772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.391790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.391889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.391907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.392077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.392094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 
00:29:56.794 [2024-07-15 12:59:48.392265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.392283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.392480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.392498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.392756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.392774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.392948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.392966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.393155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.393184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 
00:29:56.794 [2024-07-15 12:59:48.393451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.393483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.393691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.393721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.393912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.393929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.394053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.394070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.394248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.394272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 
00:29:56.794 [2024-07-15 12:59:48.394435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.394453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.394715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.394745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.394976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.395006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.395201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.395231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.395451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.395470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 
00:29:56.794 [2024-07-15 12:59:48.395566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.395586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.395741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.395758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.395996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.396014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.396192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.396210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.396409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.396428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 
00:29:56.794 [2024-07-15 12:59:48.396553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.396570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.396805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.396833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.397034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.397062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.397368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.397398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 00:29:56.794 [2024-07-15 12:59:48.397515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.397532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.794 qpair failed and we were unable to recover it. 
00:29:56.794 [2024-07-15 12:59:48.397783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.794 [2024-07-15 12:59:48.397801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.397906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.397924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.398155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.398172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.398345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.398362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.398545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.398586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 
00:29:56.795 [2024-07-15 12:59:48.398726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.398754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.398885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.398915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.399118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.399146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.399430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.399448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.399553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.399570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 
00:29:56.795 [2024-07-15 12:59:48.399674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.399692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.399860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.399878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.400169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.400198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.400328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.400359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.400508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.400527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 
00:29:56.795 [2024-07-15 12:59:48.400621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.400637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.400807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.400825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.400910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.400926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.401157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.401175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.401300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.401319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 
00:29:56.795 [2024-07-15 12:59:48.401551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.401569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.401806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.401835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.401973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.402001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.402195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.402225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.402374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.402409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 
00:29:56.795 [2024-07-15 12:59:48.402608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.402637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.402755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.402783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.402904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.402934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.403132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.403161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.403277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.403306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 
00:29:56.795 [2024-07-15 12:59:48.403491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.403521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.403706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.403736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.795 [2024-07-15 12:59:48.403947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.795 [2024-07-15 12:59:48.403965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.795 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.404129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.404168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.404391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.404422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 
00:29:56.796 [2024-07-15 12:59:48.404744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.404762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.404868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.404885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.405060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.405077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.405174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.405192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.405281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.405298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 
00:29:56.796 [2024-07-15 12:59:48.405408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.405425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.405536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.405553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.405717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.405735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.405954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.405971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.406139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.406156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 
00:29:56.796 [2024-07-15 12:59:48.406324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.406343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.406529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.406547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.406639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.406657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.406834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.406853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.406968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.406986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 
00:29:56.796 [2024-07-15 12:59:48.407216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.407234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.407538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.407556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.407729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.407746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.407920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.407937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.408243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.408267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 
00:29:56.796 [2024-07-15 12:59:48.408378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.408403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.408509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.408526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.408686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.408705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.408966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.408984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.409224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.409241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 
00:29:56.796 [2024-07-15 12:59:48.409424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.409443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.409648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.409678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.409812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.409841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.410044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.410073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.410208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.410243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 
00:29:56.796 [2024-07-15 12:59:48.410443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.410473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.410667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.410696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.410818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.410847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.411034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.411063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.411181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.411210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 
00:29:56.796 [2024-07-15 12:59:48.411417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.411448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.411642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.411660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.411860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.796 [2024-07-15 12:59:48.411888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.796 qpair failed and we were unable to recover it. 00:29:56.796 [2024-07-15 12:59:48.412192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.412221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.412453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.412484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 
00:29:56.797 [2024-07-15 12:59:48.412753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.412771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.412951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.412969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.413136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.413154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.413413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.413444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.413672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.413701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 
00:29:56.797 [2024-07-15 12:59:48.413842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.413871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.414076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.414106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.414353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.414384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.414592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.414621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.414743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.414761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 
00:29:56.797 [2024-07-15 12:59:48.414938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.414956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.415215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.415233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.415441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.415459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.415639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.415657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.415822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.415839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 
00:29:56.797 [2024-07-15 12:59:48.416048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.416066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.416259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.416278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.416385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.416403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.416604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.416622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.416713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.416731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 
00:29:56.797 [2024-07-15 12:59:48.416915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.416933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.417131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.417149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.417267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.417285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.417469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.417487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.417610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.417628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 
00:29:56.797 [2024-07-15 12:59:48.417812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.417830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.418033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.418050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.418175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.418192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.418375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.418416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.418603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.418638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 
00:29:56.797 [2024-07-15 12:59:48.418839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.418868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.419069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.419098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.419235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.419258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.419438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.419456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.419715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.419744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 
00:29:56.797 [2024-07-15 12:59:48.419934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.419963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.420168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.420197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.420334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.420365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.420554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.797 [2024-07-15 12:59:48.420571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.797 qpair failed and we were unable to recover it. 00:29:56.797 [2024-07-15 12:59:48.420746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.798 [2024-07-15 12:59:48.420764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.798 qpair failed and we were unable to recover it. 
00:29:56.798 [2024-07-15 12:59:48.420920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.798 [2024-07-15 12:59:48.420957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.798 qpair failed and we were unable to recover it. 00:29:56.798 [2024-07-15 12:59:48.421165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.798 [2024-07-15 12:59:48.421195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.798 qpair failed and we were unable to recover it. 00:29:56.798 [2024-07-15 12:59:48.421449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.798 [2024-07-15 12:59:48.421480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.798 qpair failed and we were unable to recover it. 00:29:56.798 [2024-07-15 12:59:48.421673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.798 [2024-07-15 12:59:48.421691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.798 qpair failed and we were unable to recover it. 00:29:56.798 [2024-07-15 12:59:48.421812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.798 [2024-07-15 12:59:48.421830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.798 qpair failed and we were unable to recover it. 
00:29:56.798 [2024-07-15 12:59:48.421939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.798 [2024-07-15 12:59:48.421957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.798 qpair failed and we were unable to recover it. 00:29:56.798 [2024-07-15 12:59:48.422215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.798 [2024-07-15 12:59:48.422233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.798 qpair failed and we were unable to recover it. 00:29:56.798 [2024-07-15 12:59:48.422324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.798 [2024-07-15 12:59:48.422341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.798 qpair failed and we were unable to recover it. 00:29:56.798 [2024-07-15 12:59:48.422514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.798 [2024-07-15 12:59:48.422532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.798 qpair failed and we were unable to recover it. 00:29:56.798 [2024-07-15 12:59:48.422766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.798 [2024-07-15 12:59:48.422796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.798 qpair failed and we were unable to recover it. 
00:29:56.801 [2024-07-15 12:59:48.447050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.447079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.447288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.447319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.447516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.447545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.447693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.447723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.448000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.448018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 
00:29:56.801 [2024-07-15 12:59:48.448263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.448282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.448400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.448417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.448611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.448649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.448804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.448833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.448976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.449005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 
00:29:56.801 [2024-07-15 12:59:48.449208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.449237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.449382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.449400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.449527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.449545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.449716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.449750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.450010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.450039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 
00:29:56.801 [2024-07-15 12:59:48.450180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.450207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.450432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.450451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.450554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.450572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.450692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.450709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.450938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.450956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 
00:29:56.801 [2024-07-15 12:59:48.451075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.451092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.451270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.451288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.451574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.451591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.451709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.451727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.451846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.451865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 
00:29:56.801 [2024-07-15 12:59:48.451985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.452002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.452103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.452120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.452223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.452240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.452429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.452447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.452680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.452701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 
00:29:56.801 [2024-07-15 12:59:48.452881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.452899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.453072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.453089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.453259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.453277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.453465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.453483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.453716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.453734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 
00:29:56.801 [2024-07-15 12:59:48.453896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.801 [2024-07-15 12:59:48.453914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.801 qpair failed and we were unable to recover it. 00:29:56.801 [2024-07-15 12:59:48.454148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.454178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.454309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.454340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.454493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.454522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.454673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.454714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 
00:29:56.802 [2024-07-15 12:59:48.454822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.454839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.455000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.455019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.455130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.455147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.455337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.455356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.455448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.455466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 
00:29:56.802 [2024-07-15 12:59:48.455585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.455616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.455801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.455829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.455945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.455975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.456183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.456213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.456479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.456498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 
00:29:56.802 [2024-07-15 12:59:48.456729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.456747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.456909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.456926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.457102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.457131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.457321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.457350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.457609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.457637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 
00:29:56.802 [2024-07-15 12:59:48.457840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.457868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.458135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.458165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.458344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.458374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.458607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.458636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.458827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.458845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 
00:29:56.802 [2024-07-15 12:59:48.458952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.458970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.459135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.459152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.459274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.459293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.459495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.459537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.459684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.459713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 
00:29:56.802 [2024-07-15 12:59:48.459905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.459933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.460133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.460161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.460376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.460406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.460593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.460624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.460901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.460936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 
00:29:56.802 [2024-07-15 12:59:48.461058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.461088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.461245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.461283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.461423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.461456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.461712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.461731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 00:29:56.802 [2024-07-15 12:59:48.461908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.802 [2024-07-15 12:59:48.461926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.802 qpair failed and we were unable to recover it. 
00:29:56.802 [2024-07-15 12:59:48.462046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.802 [2024-07-15 12:59:48.462064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.802 qpair failed and we were unable to recover it.
[... the same three-record failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously from 12:59:48.462 through 12:59:48.484 ...]
00:29:56.805 [2024-07-15 12:59:48.484559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.805 [2024-07-15 12:59:48.484589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.805 qpair failed and we were unable to recover it. 00:29:56.805 [2024-07-15 12:59:48.484710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.805 [2024-07-15 12:59:48.484740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.805 qpair failed and we were unable to recover it. 00:29:56.805 [2024-07-15 12:59:48.484874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.805 [2024-07-15 12:59:48.484892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.805 qpair failed and we were unable to recover it. 00:29:56.805 [2024-07-15 12:59:48.485049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.805 [2024-07-15 12:59:48.485066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.485237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.485260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 
00:29:56.806 [2024-07-15 12:59:48.485524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.485543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.485652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.485670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.485828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.485845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.485939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.485957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.486164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.486182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 
00:29:56.806 [2024-07-15 12:59:48.486412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.486431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.486636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.486654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.486753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.486770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.487017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.487035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.487293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.487312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 
00:29:56.806 [2024-07-15 12:59:48.487473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.487491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.487590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.487608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.487710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.487727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.487938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.487955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.488069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.488087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 
00:29:56.806 [2024-07-15 12:59:48.488279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.488298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.488530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.488549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.488734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.488752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.488949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.488967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.489073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.489090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 
00:29:56.806 [2024-07-15 12:59:48.489198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.489216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.489457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.489477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.489659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.489695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.489896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.489925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.490129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.490159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 
00:29:56.806 [2024-07-15 12:59:48.490295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.490324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.490472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.490502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.490694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.490723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.490978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.490996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.491082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.491101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 
00:29:56.806 [2024-07-15 12:59:48.491267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.491287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.491481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.491510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.491643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.491671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.491806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.491834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.492096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.492129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 
00:29:56.806 [2024-07-15 12:59:48.492386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.492418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.492607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.492636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.492890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.492919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.493197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.806 [2024-07-15 12:59:48.493227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.806 qpair failed and we were unable to recover it. 00:29:56.806 [2024-07-15 12:59:48.493365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.493396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 
00:29:56.807 [2024-07-15 12:59:48.493632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.493662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.493894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.493928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.494136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.494165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.494358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.494388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.494642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.494660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 
00:29:56.807 [2024-07-15 12:59:48.494784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.494801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.494964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.494982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.495183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.495201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.495313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.495331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.495539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.495555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 
00:29:56.807 [2024-07-15 12:59:48.495721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.495740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.495830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.495848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.495967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.495984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.496074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.496091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.496286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.496305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 
00:29:56.807 [2024-07-15 12:59:48.496411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.496428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.496698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.496715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.496876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.496894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.497115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.497132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.497295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.497313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 
00:29:56.807 [2024-07-15 12:59:48.497408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.497426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.497556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.497574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.497848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.497868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.498103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.498132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.498279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.498309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 
00:29:56.807 [2024-07-15 12:59:48.498430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.498459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.498680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.498708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.498967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.498985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.499155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.499172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 00:29:56.807 [2024-07-15 12:59:48.499407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.499426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 
00:29:56.807 [2024-07-15 12:59:48.499524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.807 [2024-07-15 12:59:48.499542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.807 qpair failed and we were unable to recover it. 
00:29:56.810 [... the preceding message pair (posix.c:1038:posix_sock_create connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeated with new timestamps from 2024-07-15 12:59:48.499717 through 12:59:48.521328; errno 111 is ECONNREFUSED ...] 
00:29:56.810 [2024-07-15 12:59:48.521521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.810 [2024-07-15 12:59:48.521538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.810 qpair failed and we were unable to recover it. 00:29:56.810 [2024-07-15 12:59:48.521748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.810 [2024-07-15 12:59:48.521776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.810 qpair failed and we were unable to recover it. 00:29:56.810 [2024-07-15 12:59:48.521898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.810 [2024-07-15 12:59:48.521926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.810 qpair failed and we were unable to recover it. 00:29:56.810 [2024-07-15 12:59:48.522114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.810 [2024-07-15 12:59:48.522143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.810 qpair failed and we were unable to recover it. 00:29:56.810 [2024-07-15 12:59:48.522355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.810 [2024-07-15 12:59:48.522385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.810 qpair failed and we were unable to recover it. 
00:29:56.810 [2024-07-15 12:59:48.522521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.810 [2024-07-15 12:59:48.522550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.810 qpair failed and we were unable to recover it. 00:29:56.810 [2024-07-15 12:59:48.522800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.810 [2024-07-15 12:59:48.522828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.810 qpair failed and we were unable to recover it. 00:29:56.810 [2024-07-15 12:59:48.523016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.810 [2024-07-15 12:59:48.523044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.810 qpair failed and we were unable to recover it. 00:29:56.810 [2024-07-15 12:59:48.523246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.810 [2024-07-15 12:59:48.523285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.810 qpair failed and we were unable to recover it. 00:29:56.810 [2024-07-15 12:59:48.523497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.810 [2024-07-15 12:59:48.523527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.810 qpair failed and we were unable to recover it. 
00:29:56.810 [2024-07-15 12:59:48.523663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.810 [2024-07-15 12:59:48.523692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.810 qpair failed and we were unable to recover it. 00:29:56.810 [2024-07-15 12:59:48.523948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.523977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.524248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.524286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.524415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.524444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.524656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.524685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 
00:29:56.811 [2024-07-15 12:59:48.524819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.524847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.525049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.525068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.525303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.525322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.525486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.525504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.525686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.525704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 
00:29:56.811 [2024-07-15 12:59:48.525887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.525905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.526021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.526039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.526230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.526301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.526511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.526541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.526691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.526720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 
00:29:56.811 [2024-07-15 12:59:48.526851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.526869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.526993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.527012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.527115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.527133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.527365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.527384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.527486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.527504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 
00:29:56.811 [2024-07-15 12:59:48.527597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.527614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.527701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.527717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.527887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.527905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.528115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.528145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.528298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.528328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 
00:29:56.811 [2024-07-15 12:59:48.528554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.528585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.528707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.528727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.528930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.528949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.530450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.530487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.530679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.530698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 
00:29:56.811 [2024-07-15 12:59:48.530961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.530992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.531181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.531210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.531414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.531445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.531648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.531666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.531788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.531805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 
00:29:56.811 [2024-07-15 12:59:48.531977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.531995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.532266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.532298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.532426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.532456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.532653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.532683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.532822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.532840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 
00:29:56.811 [2024-07-15 12:59:48.533036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.533054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.811 [2024-07-15 12:59:48.533231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.811 [2024-07-15 12:59:48.533248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.811 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.533367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.533386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.533476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.533493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.533617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.533636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 
00:29:56.812 [2024-07-15 12:59:48.533735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.533752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.533945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.533975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.534117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.534146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.534287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.534318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.534515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.534544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 
00:29:56.812 [2024-07-15 12:59:48.534800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.534829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.535023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.535040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.535297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.535329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.535474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.535504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.535701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.535731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 
00:29:56.812 [2024-07-15 12:59:48.535899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.535916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.536144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.536173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.536384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.536414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.536612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.536641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.536766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.536784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 
00:29:56.812 [2024-07-15 12:59:48.536995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.537024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.537280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.537312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.537501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.537530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.537714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.537742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.537946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.537975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 
00:29:56.812 [2024-07-15 12:59:48.538179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.538207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.538434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.538469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.538656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.538673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.538841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.538860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.539034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.539063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 
00:29:56.812 [2024-07-15 12:59:48.539295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.539327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.539464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.539494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.539640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.539670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.539866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.539884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.540094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.540113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 
00:29:56.812 [2024-07-15 12:59:48.540296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.812 [2024-07-15 12:59:48.540314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.812 qpair failed and we were unable to recover it. 00:29:56.812 [2024-07-15 12:59:48.540438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.540456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.540568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.540586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.540763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.540793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.540996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.541025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 
00:29:56.813 [2024-07-15 12:59:48.541172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.541202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.541402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.541433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.541621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.541650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.541872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.541911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.542074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.542093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 
00:29:56.813 [2024-07-15 12:59:48.542266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.542285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.542391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.542408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.542605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.542623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.542746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.542764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.543024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.543053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 
00:29:56.813 [2024-07-15 12:59:48.543250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.543290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.543549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.543579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.543717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.543735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.543914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.543931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.544037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.544054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 
00:29:56.813 [2024-07-15 12:59:48.544176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.544194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.544371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.544390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.544634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.544733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.544838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.544856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.544974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.544992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 
00:29:56.813 [2024-07-15 12:59:48.545265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.545284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.545397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.545415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.545576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.545593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.545840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.545859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.546019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.546037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 
00:29:56.813 [2024-07-15 12:59:48.546152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.546170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.546285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.546306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.546456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.546474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.546640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.546658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.546771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.546788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 
00:29:56.813 [2024-07-15 12:59:48.546890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.546908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.547028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.547046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.547212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.547230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.547428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.547447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.547565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.547583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 
00:29:56.813 [2024-07-15 12:59:48.547696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.547714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.813 [2024-07-15 12:59:48.547825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.813 [2024-07-15 12:59:48.547843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.813 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.548107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.548125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.548231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.548249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.548433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.548451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 
00:29:56.814 [2024-07-15 12:59:48.548576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.548593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.548701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.548719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.548837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.548855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.548959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.548977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.549233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.549251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 
00:29:56.814 [2024-07-15 12:59:48.549462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.549481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.549590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.549608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.549840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.549858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.549989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.550007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.550106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.550124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 
00:29:56.814 [2024-07-15 12:59:48.550245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.550269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.550375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.550393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.550575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.550593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.550769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.550787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.550971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.551001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 
00:29:56.814 [2024-07-15 12:59:48.551133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.551163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.551303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.551334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.551531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.551560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.551689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.551719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.551919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.551948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 
00:29:56.814 [2024-07-15 12:59:48.552148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.552166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.552382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.552400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.552494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.552512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.552676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.552694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.552790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.552808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 
00:29:56.814 [2024-07-15 12:59:48.552908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.552926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.553080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.553100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.553233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.553251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.553411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.553441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.553660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.553690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 
00:29:56.814 [2024-07-15 12:59:48.553891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.553910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.554035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.554073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.554223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.554252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.554450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.554481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.554640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.554669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 
00:29:56.814 [2024-07-15 12:59:48.554923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.554952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.555077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.555095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.814 qpair failed and we were unable to recover it. 00:29:56.814 [2024-07-15 12:59:48.555275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-07-15 12:59:48.555294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.815 qpair failed and we were unable to recover it. 00:29:56.815 [2024-07-15 12:59:48.555457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.815 [2024-07-15 12:59:48.555475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.815 qpair failed and we were unable to recover it. 00:29:56.815 [2024-07-15 12:59:48.555664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.815 [2024-07-15 12:59:48.555694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.815 qpair failed and we were unable to recover it. 
00:29:56.815 [2024-07-15 12:59:48.555884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.815 [2024-07-15 12:59:48.555915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.815 qpair failed and we were unable to recover it. 00:29:56.815 [2024-07-15 12:59:48.556130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.815 [2024-07-15 12:59:48.556159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.815 qpair failed and we were unable to recover it. 00:29:56.815 [2024-07-15 12:59:48.556297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.815 [2024-07-15 12:59:48.556328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.815 qpair failed and we were unable to recover it. 00:29:56.815 [2024-07-15 12:59:48.556455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.815 [2024-07-15 12:59:48.556485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.815 qpair failed and we were unable to recover it. 00:29:56.815 [2024-07-15 12:59:48.556691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.815 [2024-07-15 12:59:48.556721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.815 qpair failed and we were unable to recover it. 
00:29:56.815 [2024-07-15 12:59:48.556872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.815 [2024-07-15 12:59:48.556890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.815 qpair failed and we were unable to recover it. 00:29:56.815 [2024-07-15 12:59:48.557091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.815 [2024-07-15 12:59:48.557109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.815 qpair failed and we were unable to recover it. 00:29:56.815 [2024-07-15 12:59:48.557208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.815 [2024-07-15 12:59:48.557226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.815 qpair failed and we were unable to recover it. 00:29:56.815 [2024-07-15 12:59:48.557363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.815 [2024-07-15 12:59:48.557382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.815 qpair failed and we were unable to recover it. 00:29:56.815 [2024-07-15 12:59:48.557504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.815 [2024-07-15 12:59:48.557521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.815 qpair failed and we were unable to recover it. 
00:29:56.815 [2024-07-15 12:59:48.557644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.557662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.557892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.557910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.558015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.558032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.558222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.558240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.558363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.558382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.558488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.558506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.558693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.558711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.558942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.558959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.559061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.559078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.559274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.559346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.559508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.559541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.559755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.559786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.559980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.560011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.560133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.560163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.560302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.560333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.560528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.560558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.560683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.560722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.560982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.561012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.561238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.561280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.561404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.561432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.561639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.561670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.561880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.561909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.562052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.562082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.562216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.562248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.562395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.562429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.562665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.562684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.562814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.562833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.563000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.563018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.815 qpair failed and we were unable to recover it.
00:29:56.815 [2024-07-15 12:59:48.563192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.815 [2024-07-15 12:59:48.563210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.563378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.563398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.563519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.563537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.563641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.563659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.563767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.563787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.563954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.563970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.564150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.564169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.564326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.564344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.564448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.564467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.564629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.564647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.564750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.564768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.565016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.565035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.565144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.565162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.565344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.565362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.565466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.565484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.565661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.565681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.565840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.565857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.565965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.565983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.566157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.566174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.566368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.566386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.566492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.566510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.566616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.566634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.566734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.566752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.566933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.566951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.567041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.567059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.567240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.567264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.567377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.567395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.567504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.567522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.567684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.567703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.567884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.567902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.568029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.568047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.568217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.568235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.568421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.568440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.568643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.568661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.568761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.568778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.569026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.569043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.569245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.569270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.569519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.569537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.569646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.569663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.569844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.569862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.569971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.569989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.816 qpair failed and we were unable to recover it.
00:29:56.816 [2024-07-15 12:59:48.570272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.816 [2024-07-15 12:59:48.570291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.570400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.570418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.570503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.570520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.570633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.570652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.570753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.570771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.570957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.570975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.571147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.571165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.571404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.571423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.571538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.571556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.571665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.571683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.571777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.571796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.571912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.571929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.572023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.572042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.572311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.572330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.572430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.572451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.572622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.572640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.572820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.572838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.573013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.573031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.573305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.573324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.573525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.573543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.573718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.573736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.573856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.573874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.573983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.574001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.574108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.574127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.574239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.574263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.574378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.574396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.574655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.574674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.574835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.574853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.574948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.574966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.575185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.575273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d0000b90 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.817 [2024-07-15 12:59:48.575414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.817 [2024-07-15 12:59:48.575448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:56.817 qpair failed and we were unable to recover it.
00:29:56.820 [2024-07-15 12:59:48.595689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.595707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.820 [2024-07-15 12:59:48.595893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.595911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.820 [2024-07-15 12:59:48.596016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.596034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.820 [2024-07-15 12:59:48.596134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.596155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.820 [2024-07-15 12:59:48.596314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.596333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 
00:29:56.820 [2024-07-15 12:59:48.596448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.596466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.820 [2024-07-15 12:59:48.596689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.596706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.820 [2024-07-15 12:59:48.596806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.596823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.820 [2024-07-15 12:59:48.597034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.597052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.820 [2024-07-15 12:59:48.597224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.597242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 
00:29:56.820 [2024-07-15 12:59:48.597340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.597357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.820 [2024-07-15 12:59:48.597549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.597568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.820 [2024-07-15 12:59:48.597734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.597751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.820 [2024-07-15 12:59:48.597848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.597866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.820 [2024-07-15 12:59:48.597982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.597999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 
00:29:56.820 [2024-07-15 12:59:48.598177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.598195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.820 [2024-07-15 12:59:48.598297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.598316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.820 [2024-07-15 12:59:48.598626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.598644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.820 [2024-07-15 12:59:48.598820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.598838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.820 [2024-07-15 12:59:48.599020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.599038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 
00:29:56.820 [2024-07-15 12:59:48.599272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.599290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.820 [2024-07-15 12:59:48.599455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.599473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.820 [2024-07-15 12:59:48.599635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.599653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.820 [2024-07-15 12:59:48.599912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.599930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.820 [2024-07-15 12:59:48.600056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.600074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 
00:29:56.820 [2024-07-15 12:59:48.600241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.600265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.820 [2024-07-15 12:59:48.600518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.820 [2024-07-15 12:59:48.600536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.820 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.600696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.600714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.600829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.600846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.600985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.601002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 
00:29:56.821 [2024-07-15 12:59:48.601104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.601122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.601313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.601332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.601486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.601504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.601738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.601757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.601937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.601955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 
00:29:56.821 [2024-07-15 12:59:48.602063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.602081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.602242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.602265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.602429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.602447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.602569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.602587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.602763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.602781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 
00:29:56.821 [2024-07-15 12:59:48.602950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.602980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.603117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.603147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.603350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.603380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.603637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.603673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.603802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.603831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 
00:29:56.821 [2024-07-15 12:59:48.604031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.604060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.604246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.604283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.604479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.604508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.604638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.604667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.604810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.604840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 
00:29:56.821 [2024-07-15 12:59:48.605061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.605091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.605213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.605232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.605403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.605421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.605519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.605537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.605646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.605664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 
00:29:56.821 [2024-07-15 12:59:48.605835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.605853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.605954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.605972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.606088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.606106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.606207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.606225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.606434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.606452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 
00:29:56.821 [2024-07-15 12:59:48.606562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.606580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.821 [2024-07-15 12:59:48.606692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.821 [2024-07-15 12:59:48.606710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.821 qpair failed and we were unable to recover it. 00:29:56.822 [2024-07-15 12:59:48.606905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.822 [2024-07-15 12:59:48.606923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.822 qpair failed and we were unable to recover it. 00:29:56.822 [2024-07-15 12:59:48.607047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.822 [2024-07-15 12:59:48.607065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.822 qpair failed and we were unable to recover it. 00:29:56.822 [2024-07-15 12:59:48.607155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.822 [2024-07-15 12:59:48.607174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.822 qpair failed and we were unable to recover it. 
00:29:56.822 [2024-07-15 12:59:48.607328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.822 [2024-07-15 12:59:48.607347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.822 qpair failed and we were unable to recover it. 00:29:56.822 [2024-07-15 12:59:48.607509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.822 [2024-07-15 12:59:48.607527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.822 qpair failed and we were unable to recover it. 00:29:56.822 [2024-07-15 12:59:48.607816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.822 [2024-07-15 12:59:48.607845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.822 qpair failed and we were unable to recover it. 00:29:56.822 [2024-07-15 12:59:48.607961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.822 [2024-07-15 12:59:48.607990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.822 qpair failed and we were unable to recover it. 00:29:56.822 [2024-07-15 12:59:48.608176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.822 [2024-07-15 12:59:48.608205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.822 qpair failed and we were unable to recover it. 
00:29:56.822 [2024-07-15 12:59:48.608404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.822 [2024-07-15 12:59:48.608471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75c8000b90 with addr=10.0.0.2, port=4420
00:29:56.822 qpair failed and we were unable to recover it.
00:29:56.822 [2024-07-15 12:59:48.613821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.822 [2024-07-15 12:59:48.613838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.822 qpair failed and we were unable to recover it. 00:29:56.822 [2024-07-15 12:59:48.614012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.822 [2024-07-15 12:59:48.614029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.822 qpair failed and we were unable to recover it. 00:29:56.822 [2024-07-15 12:59:48.614130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.822 [2024-07-15 12:59:48.614148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.822 qpair failed and we were unable to recover it. 00:29:56.822 [2024-07-15 12:59:48.614261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.822 [2024-07-15 12:59:48.614279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.822 qpair failed and we were unable to recover it. 00:29:56.822 [2024-07-15 12:59:48.614387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.822 [2024-07-15 12:59:48.614404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.822 qpair failed and we were unable to recover it. 
00:29:56.822 [2024-07-15 12:59:48.614576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.822 [2024-07-15 12:59:48.614593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.822 qpair failed and we were unable to recover it. 00:29:56.822 [2024-07-15 12:59:48.614696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.822 [2024-07-15 12:59:48.614715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.822 qpair failed and we were unable to recover it. 00:29:56.822 [2024-07-15 12:59:48.614893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.822 [2024-07-15 12:59:48.614911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.615041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.615059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.615237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.615260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 
00:29:56.823 [2024-07-15 12:59:48.615371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.615388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.615502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.615520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.615646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.615664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.615752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.615768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.615876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.615893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 
00:29:56.823 [2024-07-15 12:59:48.615994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.616013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.616118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.616136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.616238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.616261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.616448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.616466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.616643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.616660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 
00:29:56.823 [2024-07-15 12:59:48.616774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.616793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.616978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.616996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.617172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.617190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.618279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.618311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.618500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.618519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 
00:29:56.823 [2024-07-15 12:59:48.618691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.618721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.618915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.618946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.619160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.619190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.619392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.619430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.619562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.619605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 
00:29:56.823 [2024-07-15 12:59:48.619811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.619829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.620060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.620078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.620195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.620214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.620339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.620357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.620548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.620565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 
00:29:56.823 [2024-07-15 12:59:48.620672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.620690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.620946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.620964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.621140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.621157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.621317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.621336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.621440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.621458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 
00:29:56.823 [2024-07-15 12:59:48.621558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.621576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.621752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.621770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.621946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.621963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.622073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.622091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.622251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.622276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 
00:29:56.823 [2024-07-15 12:59:48.622469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.622486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.622720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.622738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.622841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.622859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.823 qpair failed and we were unable to recover it. 00:29:56.823 [2024-07-15 12:59:48.623028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.823 [2024-07-15 12:59:48.623045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.623211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.623229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 
00:29:56.824 [2024-07-15 12:59:48.623405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.623424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.623600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.623618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.623905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.623923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.624094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.624112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.624282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.624300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 
00:29:56.824 [2024-07-15 12:59:48.624400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.624420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.624592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.624610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.624810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.624828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.624920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.624937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.625110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.625128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 
00:29:56.824 [2024-07-15 12:59:48.625289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.625308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.625492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.625509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.625607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.625625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.625716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.625735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.625831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.625851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 
00:29:56.824 [2024-07-15 12:59:48.625948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.625966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.626129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.626147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.626328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.626346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.626447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.626465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.626563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.626581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 
00:29:56.824 [2024-07-15 12:59:48.626740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.626758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.626948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.626965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.627211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.627229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.627475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.627494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.627653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.627671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 
00:29:56.824 [2024-07-15 12:59:48.627832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.627851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.628082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.628100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.628204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.628222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.628472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.628491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 00:29:56.824 [2024-07-15 12:59:48.628665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.824 [2024-07-15 12:59:48.628683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.824 qpair failed and we were unable to recover it. 
[... the same three-line record ("connect() failed, errno = 111" / "sock connection error of tqpair=... with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it.") repeats without variation from 2024-07-15 12:59:48.628863 through 12:59:48.649076, mostly for tqpair=0x7f75d8000b90 and briefly for tqpair=0x7f75d0000b90 ...]
00:29:56.827 [2024-07-15 12:59:48.649272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.827 [2024-07-15 12:59:48.649291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.827 qpair failed and we were unable to recover it. 00:29:56.827 [2024-07-15 12:59:48.649482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.827 [2024-07-15 12:59:48.649500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.827 qpair failed and we were unable to recover it. 00:29:56.827 [2024-07-15 12:59:48.649670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.827 [2024-07-15 12:59:48.649688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.827 qpair failed and we were unable to recover it. 00:29:56.827 [2024-07-15 12:59:48.649795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.827 [2024-07-15 12:59:48.649812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.827 qpair failed and we were unable to recover it. 00:29:56.827 [2024-07-15 12:59:48.652500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.827 [2024-07-15 12:59:48.652520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.827 qpair failed and we were unable to recover it. 
00:29:56.827 [2024-07-15 12:59:48.652800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.827 [2024-07-15 12:59:48.652818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.827 qpair failed and we were unable to recover it. 00:29:56.827 [2024-07-15 12:59:48.652981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.827 [2024-07-15 12:59:48.652999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.827 qpair failed and we were unable to recover it. 00:29:56.827 [2024-07-15 12:59:48.653230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.827 [2024-07-15 12:59:48.653248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.827 qpair failed and we were unable to recover it. 00:29:56.827 [2024-07-15 12:59:48.653448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.827 [2024-07-15 12:59:48.653466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.827 qpair failed and we were unable to recover it. 00:29:56.827 [2024-07-15 12:59:48.653568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.827 [2024-07-15 12:59:48.653584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.827 qpair failed and we were unable to recover it. 
00:29:56.827 [2024-07-15 12:59:48.653773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.827 [2024-07-15 12:59:48.653791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.827 qpair failed and we were unable to recover it. 00:29:56.827 [2024-07-15 12:59:48.653964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.827 [2024-07-15 12:59:48.653982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.827 qpair failed and we were unable to recover it. 00:29:56.827 [2024-07-15 12:59:48.654091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.827 [2024-07-15 12:59:48.654113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.827 qpair failed and we were unable to recover it. 00:29:56.827 [2024-07-15 12:59:48.654287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.827 [2024-07-15 12:59:48.654306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.827 qpair failed and we were unable to recover it. 00:29:56.827 [2024-07-15 12:59:48.654409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.827 [2024-07-15 12:59:48.654427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.827 qpair failed and we were unable to recover it. 
00:29:56.827 [2024-07-15 12:59:48.654532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.827 [2024-07-15 12:59:48.654548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.827 qpair failed and we were unable to recover it. 00:29:56.827 [2024-07-15 12:59:48.654713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.654731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.654837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.654854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.655036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.655054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.655224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.655241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 
00:29:56.828 [2024-07-15 12:59:48.655349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.655366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.655537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.655555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.655739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.655757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.655927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.655945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.656054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.656071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 
00:29:56.828 [2024-07-15 12:59:48.656236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.656264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.656371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.656390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.656553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.656570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.656802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.656820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.656940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.656957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 
00:29:56.828 [2024-07-15 12:59:48.657115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.657134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.657312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.657331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.657439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.657457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.657648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.657666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.657824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.657841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 
00:29:56.828 [2024-07-15 12:59:48.658014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.658031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.658205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.658222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.658423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.658442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.658671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.658688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.658813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.658831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 
00:29:56.828 [2024-07-15 12:59:48.658987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.659005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.659111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.659129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.659298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.659317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.659409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.659426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.659536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.659553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 
00:29:56.828 [2024-07-15 12:59:48.659815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.659833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.660012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.660029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.660278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.660297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.660457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.660475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.660665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.660683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 
00:29:56.828 [2024-07-15 12:59:48.660860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.660878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.661127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.661145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.661309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.661328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.661490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.661508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.661678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.661696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 
00:29:56.828 [2024-07-15 12:59:48.661929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.661947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.662119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.828 [2024-07-15 12:59:48.662136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.828 qpair failed and we were unable to recover it. 00:29:56.828 [2024-07-15 12:59:48.662315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.662333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 00:29:56.829 [2024-07-15 12:59:48.662491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.662509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 00:29:56.829 [2024-07-15 12:59:48.662672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.662690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 
00:29:56.829 [2024-07-15 12:59:48.662863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.662880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 00:29:56.829 [2024-07-15 12:59:48.663046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.663064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 00:29:56.829 [2024-07-15 12:59:48.663268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.663287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 00:29:56.829 [2024-07-15 12:59:48.663464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.663483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 00:29:56.829 [2024-07-15 12:59:48.663664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.663681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 
00:29:56.829 [2024-07-15 12:59:48.663882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.663903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 00:29:56.829 [2024-07-15 12:59:48.664013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.664031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 00:29:56.829 [2024-07-15 12:59:48.664150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.664168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 00:29:56.829 [2024-07-15 12:59:48.664330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.664349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 00:29:56.829 [2024-07-15 12:59:48.664509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.664526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 
00:29:56.829 [2024-07-15 12:59:48.664707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.664725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 00:29:56.829 [2024-07-15 12:59:48.664813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.664829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 00:29:56.829 [2024-07-15 12:59:48.664999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.665017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 00:29:56.829 [2024-07-15 12:59:48.665111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.665128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 00:29:56.829 [2024-07-15 12:59:48.665244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.665280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 
00:29:56.829 [2024-07-15 12:59:48.665441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.665459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 00:29:56.829 [2024-07-15 12:59:48.665650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.665668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 00:29:56.829 [2024-07-15 12:59:48.665844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.665862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 00:29:56.829 [2024-07-15 12:59:48.665989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.666007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 00:29:56.829 [2024-07-15 12:59:48.666109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.829 [2024-07-15 12:59:48.666127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.829 qpair failed and we were unable to recover it. 
00:29:56.832 [2024-07-15 12:59:48.687667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.687685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.687857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.687874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.688053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.688071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.688182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.688201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.688314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.688332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 
00:29:56.832 [2024-07-15 12:59:48.688502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.688520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.688635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.688651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.688892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.688910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.689009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.689027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.689154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.689171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 
00:29:56.832 [2024-07-15 12:59:48.689344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.689363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.689456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.689473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.689722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.689740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.689921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.689939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.690171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.690189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 
00:29:56.832 [2024-07-15 12:59:48.690290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.690308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.690496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.690513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.690695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.690712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.690821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.690839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.690955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.690973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 
00:29:56.832 [2024-07-15 12:59:48.691073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.691091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.691200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.691218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.691324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.691341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.691447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.691465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.691723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.691741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 
00:29:56.832 [2024-07-15 12:59:48.691948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.691966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.692061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.692078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.692242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.692264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.692387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.692405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.692576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.692594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 
00:29:56.832 [2024-07-15 12:59:48.692710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.692728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.692889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.832 [2024-07-15 12:59:48.692907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.832 qpair failed and we were unable to recover it. 00:29:56.832 [2024-07-15 12:59:48.693017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.693034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.693209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.693227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.693399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.693417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 
00:29:56.833 [2024-07-15 12:59:48.693533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.693554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.693733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.693750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.693942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.693960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.694076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.694097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.694287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.694306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 
00:29:56.833 [2024-07-15 12:59:48.694415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.694433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.694621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.694638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.694747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.694765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.695053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.695070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.695194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.695212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 
00:29:56.833 [2024-07-15 12:59:48.695401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.695420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.695543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.695561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.695719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.695736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.695832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.695852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.696123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.696192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d0000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 
00:29:56.833 [2024-07-15 12:59:48.696455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.696491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d0000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.696686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.696706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.696881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.696899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.697065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.697083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.697246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.697274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 
00:29:56.833 [2024-07-15 12:59:48.697456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.697474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.697744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.697762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.697994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.698012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.698108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.698125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.698249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.698275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 
00:29:56.833 [2024-07-15 12:59:48.698443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.698462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.698718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.698736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.698902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.698920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.699161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.699179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.699341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.699359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 
00:29:56.833 [2024-07-15 12:59:48.699539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.699558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.699669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.699687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.699885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.699903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.700067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.700085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.700259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.700278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 
00:29:56.833 [2024-07-15 12:59:48.700401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.700419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.700699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.700717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.700830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.700848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.700945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.700965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.833 qpair failed and we were unable to recover it. 00:29:56.833 [2024-07-15 12:59:48.701131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.833 [2024-07-15 12:59:48.701149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:56.834 qpair failed and we were unable to recover it. 
00:29:56.834 [2024-07-15 12:59:48.701324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.834 [2024-07-15 12:59:48.701346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:56.834 qpair failed and we were unable to recover it.
[... the identical three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 12:59:48.701593 through 12:59:48.722615 ...]
00:29:57.113 [2024-07-15 12:59:48.722711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.113 [2024-07-15 12:59:48.722729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.113 qpair failed and we were unable to recover it.
00:29:57.113 [2024-07-15 12:59:48.722895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.722913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 00:29:57.113 [2024-07-15 12:59:48.723173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.723191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 00:29:57.113 [2024-07-15 12:59:48.723307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.723324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 00:29:57.113 [2024-07-15 12:59:48.723505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.723523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 00:29:57.113 [2024-07-15 12:59:48.723698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.723716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 
00:29:57.113 [2024-07-15 12:59:48.723831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.723848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 00:29:57.113 [2024-07-15 12:59:48.724039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.724057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 00:29:57.113 [2024-07-15 12:59:48.724167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.724185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 00:29:57.113 [2024-07-15 12:59:48.724351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.724370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 00:29:57.113 [2024-07-15 12:59:48.724531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.724549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 
00:29:57.113 [2024-07-15 12:59:48.724716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.724733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 00:29:57.113 [2024-07-15 12:59:48.725007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.725025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 00:29:57.113 [2024-07-15 12:59:48.725219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.725237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 00:29:57.113 [2024-07-15 12:59:48.725410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.725428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 00:29:57.113 [2024-07-15 12:59:48.725702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.725720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 
00:29:57.113 [2024-07-15 12:59:48.725829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.725847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 00:29:57.113 [2024-07-15 12:59:48.726049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.726067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 00:29:57.113 [2024-07-15 12:59:48.726262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.726281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 00:29:57.113 [2024-07-15 12:59:48.726534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.726552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 00:29:57.113 [2024-07-15 12:59:48.726736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.726754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 
00:29:57.113 [2024-07-15 12:59:48.726924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.726942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 00:29:57.113 [2024-07-15 12:59:48.727065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.727083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 00:29:57.113 [2024-07-15 12:59:48.727299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.727317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 00:29:57.113 [2024-07-15 12:59:48.727526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.727544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 00:29:57.113 [2024-07-15 12:59:48.727801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-15 12:59:48.727819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.113 qpair failed and we were unable to recover it. 
00:29:57.113 [2024-07-15 12:59:48.727993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.728010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.728178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.728196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.728358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.728376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.728486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.728504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.728625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.728643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 
00:29:57.114 [2024-07-15 12:59:48.728743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.728760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.728988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.729006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.729172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.729193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.729444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.729462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.729644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.729662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 
00:29:57.114 [2024-07-15 12:59:48.729843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.729861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.730109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.730127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.730349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.730368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.730472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.730489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.730655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.730673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 
00:29:57.114 [2024-07-15 12:59:48.730829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.730847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.731080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.731098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.731196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.731213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.731332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.731351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.731451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.731468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 
00:29:57.114 [2024-07-15 12:59:48.731641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.731659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.731753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.731770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.731949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.731967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.732128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.732146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.732248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.732273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 
00:29:57.114 [2024-07-15 12:59:48.732478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.732496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.732726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.732744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.733007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.733025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.733152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.733170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.733393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.733411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 
00:29:57.114 [2024-07-15 12:59:48.733570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.733588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.733758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.733776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.734027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.734045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.734276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.734295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.734466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.734484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 
00:29:57.114 [2024-07-15 12:59:48.734665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.734683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.734916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.734934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.735119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.735137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.735299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.735318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.735482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.735500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 
00:29:57.114 [2024-07-15 12:59:48.735636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.735654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.735781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.735799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.114 [2024-07-15 12:59:48.736032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-15 12:59:48.736050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.114 qpair failed and we were unable to recover it. 00:29:57.115 [2024-07-15 12:59:48.736177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.115 [2024-07-15 12:59:48.736195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.115 qpair failed and we were unable to recover it. 00:29:57.115 [2024-07-15 12:59:48.736408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.115 [2024-07-15 12:59:48.736426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.115 qpair failed and we were unable to recover it. 
00:29:57.115 [2024-07-15 12:59:48.736606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.115 [2024-07-15 12:59:48.736625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.115 qpair failed and we were unable to recover it. 00:29:57.115 [2024-07-15 12:59:48.736730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.115 [2024-07-15 12:59:48.736748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.115 qpair failed and we were unable to recover it. 00:29:57.115 [2024-07-15 12:59:48.736914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.115 [2024-07-15 12:59:48.736935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.115 qpair failed and we were unable to recover it. 00:29:57.115 [2024-07-15 12:59:48.737042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.115 [2024-07-15 12:59:48.737060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.115 qpair failed and we were unable to recover it. 00:29:57.115 [2024-07-15 12:59:48.737225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.115 [2024-07-15 12:59:48.737243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.115 qpair failed and we were unable to recover it. 
00:29:57.115 [2024-07-15 12:59:48.737423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.115 [2024-07-15 12:59:48.737441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.115 qpair failed and we were unable to recover it.
00:29:57.115 (last 3 messages repeated ~110 times between 12:59:48.737 and 12:59:48.760, all connect() failed with errno = 111 against 10.0.0.2:4420; tqpair=0x7f75d8000b90 through 12:59:48.759, then tqpair=0x7f75d0000b90)
00:29:57.118 [2024-07-15 12:59:48.760168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.118 [2024-07-15 12:59:48.760188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.118 qpair failed and we were unable to recover it.
00:29:57.118 [2024-07-15 12:59:48.760289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.760306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.760510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.760528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.760702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.760723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.760885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.760903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.761075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.761093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 
00:29:57.118 [2024-07-15 12:59:48.761214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.761232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.761415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.761433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.761598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.761616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.761784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.761802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.761966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.761984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 
00:29:57.118 [2024-07-15 12:59:48.762086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.762104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.762267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.762285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.762461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.762479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.762564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.762581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.762687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.762704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 
00:29:57.118 [2024-07-15 12:59:48.762943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.762961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.763080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.763098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.763200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.763217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.763344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.763362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.763530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.763548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 
00:29:57.118 [2024-07-15 12:59:48.763649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.763666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.763774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.763791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.764009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.764027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.764153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.764171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.764357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.764376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 
00:29:57.118 [2024-07-15 12:59:48.764538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.764556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.764729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.764747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.764921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.764939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.765170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.765188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.765448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.765467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 
00:29:57.118 [2024-07-15 12:59:48.765579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.765597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.765838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.765856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.765950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.765968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.766130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.766148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-07-15 12:59:48.766331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.766350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 
00:29:57.118 [2024-07-15 12:59:48.766457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.118 [2024-07-15 12:59:48.766474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.766704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.766722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.767007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.767025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.767215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.767233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.767499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.767518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 
00:29:57.119 [2024-07-15 12:59:48.767627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.767645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.767817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.767835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.768066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.768086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.768284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.768303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.768469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.768488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 
00:29:57.119 [2024-07-15 12:59:48.768688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.768706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.768893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.768911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.769078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.769096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.769271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.769290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.769471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.769489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 
00:29:57.119 [2024-07-15 12:59:48.769594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.769612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.769782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.769800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.769992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.770010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.770242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.770264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.770445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.770463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 
00:29:57.119 [2024-07-15 12:59:48.770726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.770744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.770942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.770960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.771164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.771182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.771292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.771309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.771470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.771488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 
00:29:57.119 [2024-07-15 12:59:48.771750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.771768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.771884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.771902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.772135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.772153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.772271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.772290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.772453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.772471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 
00:29:57.119 [2024-07-15 12:59:48.772636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.772654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.772756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.772773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.772949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.772967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.773082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.773099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-07-15 12:59:48.773213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.119 [2024-07-15 12:59:48.773230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.119 qpair failed and we were unable to recover it. 
00:29:57.119 [2024-07-15 12:59:48.773508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.120 [2024-07-15 12:59:48.773526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.120 qpair failed and we were unable to recover it. 00:29:57.120 [2024-07-15 12:59:48.773793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.120 [2024-07-15 12:59:48.773811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.120 qpair failed and we were unable to recover it. 00:29:57.120 [2024-07-15 12:59:48.773981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.120 [2024-07-15 12:59:48.773999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.120 qpair failed and we were unable to recover it. 00:29:57.120 [2024-07-15 12:59:48.774273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.120 [2024-07-15 12:59:48.774291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.120 qpair failed and we were unable to recover it. 00:29:57.120 [2024-07-15 12:59:48.774473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.120 [2024-07-15 12:59:48.774492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.120 qpair failed and we were unable to recover it. 
00:29:57.120 [2024-07-15 12:59:48.774678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.774696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.774801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.774821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.775115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.775133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.775319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.775337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.775429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.775445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.775676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.775694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.775845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.775862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.775989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.776010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.776192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.776209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.776489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.776507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.776722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.776740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.776855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.776872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.777034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.777053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.777166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.777184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.777295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.777313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.777492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.777510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.777682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.777700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.777860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.777878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.777987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.778005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.778113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.778131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.778227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.778244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.778424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.778443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.778613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.778631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.778729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.778745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.778841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.778859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.778954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.778972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.779166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.779184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.779383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.779402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.779591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.779608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.779788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.779807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.779908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.779924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.780187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.780205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.780434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.780452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.780711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.780729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.780898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.780917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.781029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.120 [2024-07-15 12:59:48.781046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.120 qpair failed and we were unable to recover it.
00:29:57.120 [2024-07-15 12:59:48.781225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.781243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.781358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.781376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.781536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.781554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.781749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.781767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.781867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.781885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.782069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.782086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.782354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.782373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.782473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.782489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.782660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.782678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.782909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.782927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.783247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.783281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.783512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.783533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.783694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.783712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.783819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.783837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.784070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.784088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.784195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.784211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.784380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.784399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.784590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.784607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.784787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.784805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.785059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.785077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.785332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.785351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.785579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.785597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.785760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.785778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.785911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.785928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.786103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.786121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.786367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.786386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.786635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.786653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.786827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.786845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.787021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.787039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.787199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.787217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.787326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.787345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.787508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.787526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.787622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.787639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.787762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.787779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.787939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.787956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.788118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.788135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.788308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.788327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.788415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.788432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.788545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.788564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.788674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.788691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.121 [2024-07-15 12:59:48.788854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.121 [2024-07-15 12:59:48.788872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.121 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.788978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.788997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.789177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.789195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.789359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.789377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.789497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.789514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.789607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.789623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.789885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.789903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.790169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.790187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.790314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.790333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.790441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.790459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.790636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.790653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.790758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.790778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.791032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.791050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.791212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.791229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.791409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.791428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.791664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.791681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.791772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.791789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.792020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.792037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.792156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.792173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.792300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.792318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.792483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.792502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.792596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.792613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.792866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.792884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.793053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.793071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.793172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.793190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.793311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.793329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.793424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.793440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.793543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.793559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.793865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.793883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.794040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.794058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.794175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.794193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.794314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.794332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.794458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.794476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.794582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.794599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.794856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.794874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.794968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.794985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.795178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.795196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.795432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.795451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.795547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.122 [2024-07-15 12:59:48.795565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.122 qpair failed and we were unable to recover it.
00:29:57.122 [2024-07-15 12:59:48.795830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.122 [2024-07-15 12:59:48.795848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.122 qpair failed and we were unable to recover it. 00:29:57.122 [2024-07-15 12:59:48.795956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.122 [2024-07-15 12:59:48.795974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.122 qpair failed and we were unable to recover it. 00:29:57.122 [2024-07-15 12:59:48.796225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.122 [2024-07-15 12:59:48.796243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.122 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.796367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.796385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.796645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.796663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 
00:29:57.123 [2024-07-15 12:59:48.796842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.796860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.796958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.796976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.797149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.797168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.797398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.797417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.797508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.797526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 
00:29:57.123 [2024-07-15 12:59:48.797779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.797809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.798015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.798045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.798232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.798287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.798410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.798440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.798631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.798660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 
00:29:57.123 [2024-07-15 12:59:48.798871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.798901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.799156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.799185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.799460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.799491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.799739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.799758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.799932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.799950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 
00:29:57.123 [2024-07-15 12:59:48.800040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.800056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.800243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.800265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.800442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.800460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.800574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.800592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.800823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.800841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 
00:29:57.123 [2024-07-15 12:59:48.800938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.800956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.801191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.801209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.801368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.801386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.801628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.801646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.801848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.801866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 
00:29:57.123 [2024-07-15 12:59:48.802037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.802054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.802177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.802195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.802477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.802495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.802603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.802621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.802826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.802844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 
00:29:57.123 [2024-07-15 12:59:48.802989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.803007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.803201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.803230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.803358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.123 [2024-07-15 12:59:48.803388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.123 qpair failed and we were unable to recover it. 00:29:57.123 [2024-07-15 12:59:48.803644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.803674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.803961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.804031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 
00:29:57.124 [2024-07-15 12:59:48.804336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.804373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.804608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.804640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.804796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.804826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.804949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.804979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.805182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.805212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 
00:29:57.124 [2024-07-15 12:59:48.805350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.805381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.805579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.805610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.805714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.805734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.805852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.805870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.805980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.805998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 
00:29:57.124 [2024-07-15 12:59:48.806120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.806138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.806328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.806347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.806508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.806526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.806620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.806638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.806747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.806765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 
00:29:57.124 [2024-07-15 12:59:48.806872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.806889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.807000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.807018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.807136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.807154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.807323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.807342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.807456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.807473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 
00:29:57.124 [2024-07-15 12:59:48.807637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.807655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.807765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.807783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.807945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.807963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.808141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.808159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.808326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.808345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 
00:29:57.124 [2024-07-15 12:59:48.808513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.808530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.808766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.808783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.808957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.808975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.809145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.809163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.809402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.809421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 
00:29:57.124 [2024-07-15 12:59:48.809642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.809660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.809836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.809854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.810033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.810051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.810360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.810391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.810506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.810535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 
00:29:57.124 [2024-07-15 12:59:48.810738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.810768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.810912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.810942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.811128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.811159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.124 [2024-07-15 12:59:48.811304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.124 [2024-07-15 12:59:48.811335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.124 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.811531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.811570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 
00:29:57.125 [2024-07-15 12:59:48.811693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.811711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.811946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.811964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.812157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.812175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.812286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.812304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.812465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.812484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 
00:29:57.125 [2024-07-15 12:59:48.812646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.812664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.812831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.812849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.813103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.813121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.813290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.813309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.813479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.813508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 
00:29:57.125 [2024-07-15 12:59:48.813756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.813786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.814005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.814034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.814166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.814197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.814412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.814444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.814631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.814672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 
00:29:57.125 [2024-07-15 12:59:48.814923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.814941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.815052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.815069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.815232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.815251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.815417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.815457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.815605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.815634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 
00:29:57.125 [2024-07-15 12:59:48.815773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.815802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.816026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.816055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.816206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.816235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.816499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.816530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.816672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.816702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 
00:29:57.125 [2024-07-15 12:59:48.816825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.816854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.817047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.817077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.817331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.817363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.817555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.817585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.817717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.817746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 
00:29:57.125 [2024-07-15 12:59:48.817933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.817962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.818095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.818124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.818274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.818305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.818554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.818572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.818689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.818707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 
00:29:57.125 [2024-07-15 12:59:48.818803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.818822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.818924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.818941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.819132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.819151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.125 [2024-07-15 12:59:48.819383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.125 [2024-07-15 12:59:48.819401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.125 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.819504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.819525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 
00:29:57.126 [2024-07-15 12:59:48.819770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.819798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.819929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.819959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.820159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.820189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.820325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.820355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.820569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.820599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 
00:29:57.126 [2024-07-15 12:59:48.820910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.820939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.821123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.821153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.821395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.821415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.821578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.821596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.821688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.821706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 
00:29:57.126 [2024-07-15 12:59:48.821827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.821846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.822051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.822080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.822288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.822319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.822529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.822559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.822684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.822714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 
00:29:57.126 [2024-07-15 12:59:48.823050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.823079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.823204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.823221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.823457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.823476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.823645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.823663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.823837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.823855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 
00:29:57.126 [2024-07-15 12:59:48.824077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.824095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.824297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.824316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.824479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.824498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.824592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.824610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.824803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.824821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 
00:29:57.126 [2024-07-15 12:59:48.824977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.824995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.825209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.825239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.825504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.825534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.825664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.825693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.825943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.825973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 
00:29:57.126 [2024-07-15 12:59:48.826172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.826202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.826435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.826466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.826667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.826697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.826891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.826921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.827126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.827156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 
00:29:57.126 [2024-07-15 12:59:48.827297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.827316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.827434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.827452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.827706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.827724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.827838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.827856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 00:29:57.126 [2024-07-15 12:59:48.828143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.126 [2024-07-15 12:59:48.828177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.126 qpair failed and we were unable to recover it. 
00:29:57.127 [2024-07-15 12:59:48.828375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.127 [2024-07-15 12:59:48.828405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.127 qpair failed and we were unable to recover it. 00:29:57.127 [2024-07-15 12:59:48.828590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.127 [2024-07-15 12:59:48.828620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.127 qpair failed and we were unable to recover it. 00:29:57.127 [2024-07-15 12:59:48.828871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.127 [2024-07-15 12:59:48.828890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.127 qpair failed and we were unable to recover it. 00:29:57.127 [2024-07-15 12:59:48.828999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.127 [2024-07-15 12:59:48.829016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.127 qpair failed and we were unable to recover it. 00:29:57.127 [2024-07-15 12:59:48.829196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.127 [2024-07-15 12:59:48.829214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.127 qpair failed and we were unable to recover it. 
00:29:57.127 [2024-07-15 12:59:48.829324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.127 [2024-07-15 12:59:48.829343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.127 qpair failed and we were unable to recover it. 00:29:57.127 [2024-07-15 12:59:48.829522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.127 [2024-07-15 12:59:48.829541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.127 qpair failed and we were unable to recover it. 00:29:57.127 [2024-07-15 12:59:48.829639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.127 [2024-07-15 12:59:48.829657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.127 qpair failed and we were unable to recover it. 00:29:57.127 [2024-07-15 12:59:48.829785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.127 [2024-07-15 12:59:48.829803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.127 qpair failed and we were unable to recover it. 00:29:57.127 [2024-07-15 12:59:48.829978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.127 [2024-07-15 12:59:48.829996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.127 qpair failed and we were unable to recover it. 
00:29:57.127 [2024-07-15 12:59:48.830198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.127 [2024-07-15 12:59:48.830216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.127 qpair failed and we were unable to recover it. 00:29:57.127 [2024-07-15 12:59:48.830335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.127 [2024-07-15 12:59:48.830354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.127 qpair failed and we were unable to recover it. 00:29:57.127 [2024-07-15 12:59:48.830533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.127 [2024-07-15 12:59:48.830552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.127 qpair failed and we were unable to recover it. 00:29:57.127 [2024-07-15 12:59:48.830784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.127 [2024-07-15 12:59:48.830802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.127 qpair failed and we were unable to recover it. 00:29:57.127 [2024-07-15 12:59:48.830973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.127 [2024-07-15 12:59:48.831002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.127 qpair failed and we were unable to recover it. 
00:29:57.128 [2024-07-15 12:59:48.842849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.128 [2024-07-15 12:59:48.842867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.128 qpair failed and we were unable to recover it.
00:29:57.129 [2024-07-15 12:59:48.843094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.129 [2024-07-15 12:59:48.843164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:57.129 qpair failed and we were unable to recover it.
00:29:57.129 [2024-07-15 12:59:48.843393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.129 [2024-07-15 12:59:48.843429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:57.129 qpair failed and we were unable to recover it.
00:29:57.129 [2024-07-15 12:59:48.843627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.129 [2024-07-15 12:59:48.843658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:57.129 qpair failed and we were unable to recover it.
00:29:57.129 [2024-07-15 12:59:48.843796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.129 [2024-07-15 12:59:48.843835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:57.129 qpair failed and we were unable to recover it.
00:29:57.130 [2024-07-15 12:59:48.853490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.853533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.853651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.853669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.853774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.853791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.853949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.853967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.854075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.854093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 
00:29:57.130 [2024-07-15 12:59:48.854217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.854235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.854427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.854446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.854629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.854648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.854833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.854851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.855039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.855057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 
00:29:57.130 [2024-07-15 12:59:48.855177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.855194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.855368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.855387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.855490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.855508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.855611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.855629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.855810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.855829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 
00:29:57.130 [2024-07-15 12:59:48.855937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.855955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.856057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.856074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.856268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.856287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.856452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.856470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.856568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.856585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 
00:29:57.130 [2024-07-15 12:59:48.856696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.856714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.856822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.856843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.857006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.857023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.857134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.857152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.857383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.857402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 
00:29:57.130 [2024-07-15 12:59:48.857517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.857535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.857790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.857808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.857898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.857915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.858020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.858038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.858198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.858216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 
00:29:57.130 [2024-07-15 12:59:48.858389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.858407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.858518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.858534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.130 qpair failed and we were unable to recover it. 00:29:57.130 [2024-07-15 12:59:48.858647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.130 [2024-07-15 12:59:48.858665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.858830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.858848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.859024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.859042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 
00:29:57.131 [2024-07-15 12:59:48.859157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.859175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.859293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.859311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.859457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.859476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.859591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.859609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.859846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.859864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 
00:29:57.131 [2024-07-15 12:59:48.859980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.859998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.860164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.860182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.860440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.860459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.860568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.860585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.860774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.860792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 
00:29:57.131 [2024-07-15 12:59:48.860903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.860921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.861038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.861056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.861145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.861161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.861278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.861295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.861403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.861423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 
00:29:57.131 [2024-07-15 12:59:48.861526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.861544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.861654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.861671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.861782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.861799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.861900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.861918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.862097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.862115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 
00:29:57.131 [2024-07-15 12:59:48.862214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.862231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.862388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.862406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.862502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.862519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.862614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.862632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.862767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.862784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 
00:29:57.131 [2024-07-15 12:59:48.862881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.862899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.863002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.863023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.863130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.863147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.863391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.863410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.863516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.863534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 
00:29:57.131 [2024-07-15 12:59:48.863634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.863652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.863764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.863782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.863891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.863911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.864011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.864029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.864131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.864149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 
00:29:57.131 [2024-07-15 12:59:48.864381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.864400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.864582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.864599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.864696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.131 [2024-07-15 12:59:48.864713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.131 qpair failed and we were unable to recover it. 00:29:57.131 [2024-07-15 12:59:48.864876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.864893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.865068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.865086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 
00:29:57.132 [2024-07-15 12:59:48.865184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.865201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.865462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.865481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.865707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.865725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.865921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.865939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.866033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.866050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 
00:29:57.132 [2024-07-15 12:59:48.866149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.866165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.866286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.866304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.866469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.866487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.866582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.866599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.866763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.866782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 
00:29:57.132 [2024-07-15 12:59:48.866958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.866976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.867140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.867157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.867363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.867382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.867495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.867514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.867618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.867636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 
00:29:57.132 [2024-07-15 12:59:48.867849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.867881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.867996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.868025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.868162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.868190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.868323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.868352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.868467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.868484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 
00:29:57.132 [2024-07-15 12:59:48.868663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.868681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.868800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.868818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.868983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.869001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.869108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.869127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.869224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.869241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 
00:29:57.132 [2024-07-15 12:59:48.869352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.869374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.869476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.869498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.869678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.869695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.869806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.869824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.870067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.870085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 
00:29:57.132 [2024-07-15 12:59:48.870267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.870285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.870394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.870413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.870513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.870530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.132 qpair failed and we were unable to recover it. 00:29:57.132 [2024-07-15 12:59:48.870640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.132 [2024-07-15 12:59:48.870658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.870816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.870833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 
00:29:57.133 [2024-07-15 12:59:48.870928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.870946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.871104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.871122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.871220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.871236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.871364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.871383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.871492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.871510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 
00:29:57.133 [2024-07-15 12:59:48.871672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.871690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.871852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.871870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.871965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.871981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.872080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.872098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.872328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.872348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 
00:29:57.133 [2024-07-15 12:59:48.872453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.872471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.872636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.872655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.872846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.872864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.872972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.872990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.873151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.873169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 
00:29:57.133 [2024-07-15 12:59:48.873281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.873299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.873388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.873404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.873502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.873519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.873614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.873632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.873728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.873746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 
00:29:57.133 [2024-07-15 12:59:48.873913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.873931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.874111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.874129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.874294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.874312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.874525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.874543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.874650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.874667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 
00:29:57.133 [2024-07-15 12:59:48.874830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.874848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.874956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.874974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.875085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.875103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.875194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.875211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.875316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.875334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 
00:29:57.133 [2024-07-15 12:59:48.875460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.875477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.875659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.875680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.875786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.875804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.875894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.875911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.876090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.876108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 
00:29:57.133 [2024-07-15 12:59:48.876289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.876308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.876410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.876428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.876609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.876626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.876733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-15 12:59:48.876751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.133 qpair failed and we were unable to recover it. 00:29:57.133 [2024-07-15 12:59:48.876847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-15 12:59:48.876865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.134 qpair failed and we were unable to recover it. 
00:29:57.134 [2024-07-15 12:59:48.876963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-15 12:59:48.876981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.134 qpair failed and we were unable to recover it. 00:29:57.134 [2024-07-15 12:59:48.877169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-15 12:59:48.877187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.134 qpair failed and we were unable to recover it. 00:29:57.134 [2024-07-15 12:59:48.877370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-15 12:59:48.877388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.134 qpair failed and we were unable to recover it. 00:29:57.134 [2024-07-15 12:59:48.877496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-15 12:59:48.877514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.134 qpair failed and we were unable to recover it. 00:29:57.134 [2024-07-15 12:59:48.877692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-15 12:59:48.877710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.134 qpair failed and we were unable to recover it. 
00:29:57.134 [2024-07-15 12:59:48.877812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-15 12:59:48.877830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.134 qpair failed and we were unable to recover it. 00:29:57.134 [2024-07-15 12:59:48.878012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-15 12:59:48.878030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.134 qpair failed and we were unable to recover it. 00:29:57.134 [2024-07-15 12:59:48.878126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-15 12:59:48.878144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.134 qpair failed and we were unable to recover it. 00:29:57.134 [2024-07-15 12:59:48.878267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-15 12:59:48.878285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.134 qpair failed and we were unable to recover it. 00:29:57.134 [2024-07-15 12:59:48.878396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-15 12:59:48.878414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.134 qpair failed and we were unable to recover it. 
00:29:57.134 [2024-07-15 12:59:48.878548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-15 12:59:48.878566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.134 qpair failed and we were unable to recover it. 00:29:57.134 [2024-07-15 12:59:48.878810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-15 12:59:48.878828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.134 qpair failed and we were unable to recover it. 00:29:57.134 [2024-07-15 12:59:48.878924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-15 12:59:48.878943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.134 qpair failed and we were unable to recover it. 00:29:57.134 [2024-07-15 12:59:48.879034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-15 12:59:48.879052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.134 qpair failed and we were unable to recover it. 00:29:57.134 [2024-07-15 12:59:48.879172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-15 12:59:48.879189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.134 qpair failed and we were unable to recover it. 
00:29:57.134 [2024-07-15 12:59:48.879363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-15 12:59:48.879381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.134 qpair failed and we were unable to recover it. 00:29:57.134 [2024-07-15 12:59:48.879544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-15 12:59:48.879562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.134 qpair failed and we were unable to recover it. 00:29:57.134 [2024-07-15 12:59:48.879666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-15 12:59:48.879684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.134 qpair failed and we were unable to recover it. 00:29:57.134 [2024-07-15 12:59:48.879855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-15 12:59:48.879873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.134 qpair failed and we were unable to recover it. 00:29:57.134 [2024-07-15 12:59:48.880053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-15 12:59:48.880070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.134 qpair failed and we were unable to recover it. 
00:29:57.134 [2024-07-15 12:59:48.880165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.134 [2024-07-15 12:59:48.880181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.134 qpair failed and we were unable to recover it.
00:29:57.134 [2024-07-15 12:59:48.880285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.134 [2024-07-15 12:59:48.880304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.134 qpair failed and we were unable to recover it.
00:29:57.134 [2024-07-15 12:59:48.880411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.134 [2024-07-15 12:59:48.880429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.134 qpair failed and we were unable to recover it.
00:29:57.134 [2024-07-15 12:59:48.880546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.134 [2024-07-15 12:59:48.880564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.134 qpair failed and we were unable to recover it.
00:29:57.134 [2024-07-15 12:59:48.880734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.134 [2024-07-15 12:59:48.880752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.134 qpair failed and we were unable to recover it.
00:29:57.134 [2024-07-15 12:59:48.880850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.134 [2024-07-15 12:59:48.880866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.134 qpair failed and we were unable to recover it.
00:29:57.134 [2024-07-15 12:59:48.880961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.134 [2024-07-15 12:59:48.880978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.134 qpair failed and we were unable to recover it.
00:29:57.134 [2024-07-15 12:59:48.881088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.134 [2024-07-15 12:59:48.881106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.134 qpair failed and we were unable to recover it.
00:29:57.134 [2024-07-15 12:59:48.881204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.134 [2024-07-15 12:59:48.881222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.134 qpair failed and we were unable to recover it.
00:29:57.134 [2024-07-15 12:59:48.881349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.134 [2024-07-15 12:59:48.881368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.134 qpair failed and we were unable to recover it.
00:29:57.134 [2024-07-15 12:59:48.881481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.134 [2024-07-15 12:59:48.881498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.134 qpair failed and we were unable to recover it.
00:29:57.134 [2024-07-15 12:59:48.881756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.134 [2024-07-15 12:59:48.881777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.134 qpair failed and we were unable to recover it.
00:29:57.134 [2024-07-15 12:59:48.881890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.134 [2024-07-15 12:59:48.881908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.134 qpair failed and we were unable to recover it.
00:29:57.134 [2024-07-15 12:59:48.882078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.134 [2024-07-15 12:59:48.882096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.134 qpair failed and we were unable to recover it.
00:29:57.134 [2024-07-15 12:59:48.882196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.134 [2024-07-15 12:59:48.882214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.134 qpair failed and we were unable to recover it.
00:29:57.134 [2024-07-15 12:59:48.882314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.134 [2024-07-15 12:59:48.882332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.134 qpair failed and we were unable to recover it.
00:29:57.134 [2024-07-15 12:59:48.882504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.134 [2024-07-15 12:59:48.882522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.134 qpair failed and we were unable to recover it.
00:29:57.134 [2024-07-15 12:59:48.882616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.134 [2024-07-15 12:59:48.882633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.134 qpair failed and we were unable to recover it.
00:29:57.134 [2024-07-15 12:59:48.882735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.134 [2024-07-15 12:59:48.882753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.134 qpair failed and we were unable to recover it.
00:29:57.134 [2024-07-15 12:59:48.882858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.134 [2024-07-15 12:59:48.882876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.134 qpair failed and we were unable to recover it.
00:29:57.134 [2024-07-15 12:59:48.882969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.882990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.883026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5ce60 (9): Bad file descriptor
00:29:57.135 [2024-07-15 12:59:48.883284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.883355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.883661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.883694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.883904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.883935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.884075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.884114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.884308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.884340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.884537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.884558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.884678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.884695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.884929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.884947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.885107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.885126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.885220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.885236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.885352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.885371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.885532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.885550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.885680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.885697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.885810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.885827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.886041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.886058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.886180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.886197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.886373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.886392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.886490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.886508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.886610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.886628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.886746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.886764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.886856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.886872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.887052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.887070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.887232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.887250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.887450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.887469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.887637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.887655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.887814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.887832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.887937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.887955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.888114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.888131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.888249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.888293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.888509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.888528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.888711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.888728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.888920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.888938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.889039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.889057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.889287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.889306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.889403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.889423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.889524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.889542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.889785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.889803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.890008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.890026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.890193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.135 [2024-07-15 12:59:48.890211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.135 qpair failed and we were unable to recover it.
00:29:57.135 [2024-07-15 12:59:48.890497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.890516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.890629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.890647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.890741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.890758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.890933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.890951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.891129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.891150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.891306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.891324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.891528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.891546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.891738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.891755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.891917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.891934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.892097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.892115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.892291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.892309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.892402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.892419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.892532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.892550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.892723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.892741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.892907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.892925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.893028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.893046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.893165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.893182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.893351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.893369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.893458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.893475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.893570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.893589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.893691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.893708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.893914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.893932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.894189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.894207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.894328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.894346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.894465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.894482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.894589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.894607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.894779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.894797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.894893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.894911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.895143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.895160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.895328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.895346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.895460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.895478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.895594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.895611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.895778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.895796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.895978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.895996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.136 [2024-07-15 12:59:48.896169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.136 [2024-07-15 12:59:48.896187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.136 qpair failed and we were unable to recover it.
00:29:57.137 [2024-07-15 12:59:48.896319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.137 [2024-07-15 12:59:48.896338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.137 qpair failed and we were unable to recover it.
00:29:57.137 [2024-07-15 12:59:48.896462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.137 [2024-07-15 12:59:48.896480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.137 qpair failed and we were unable to recover it.
00:29:57.137 [2024-07-15 12:59:48.896644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.137 [2024-07-15 12:59:48.896661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.137 qpair failed and we were unable to recover it.
00:29:57.137 [2024-07-15 12:59:48.896778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.137 [2024-07-15 12:59:48.896796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.137 qpair failed and we were unable to recover it.
00:29:57.137 [2024-07-15 12:59:48.896956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.137 [2024-07-15 12:59:48.896974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.137 qpair failed and we were unable to recover it.
00:29:57.137 [2024-07-15 12:59:48.897137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.137 [2024-07-15 12:59:48.897155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.137 qpair failed and we were unable to recover it.
00:29:57.137 [2024-07-15 12:59:48.897383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.137 [2024-07-15 12:59:48.897402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.137 qpair failed and we were unable to recover it.
00:29:57.137 [2024-07-15 12:59:48.897606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.137 [2024-07-15 12:59:48.897624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.137 qpair failed and we were unable to recover it.
00:29:57.137 [2024-07-15 12:59:48.897814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.137 [2024-07-15 12:59:48.897832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.137 qpair failed and we were unable to recover it.
00:29:57.137 [2024-07-15 12:59:48.897940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.137 [2024-07-15 12:59:48.897961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.137 qpair failed and we were unable to recover it.
00:29:57.137 [2024-07-15 12:59:48.898102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.137 [2024-07-15 12:59:48.898120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.137 qpair failed and we were unable to recover it.
00:29:57.137 [2024-07-15 12:59:48.898217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.137 [2024-07-15 12:59:48.898235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.137 qpair failed and we were unable to recover it.
00:29:57.137 [2024-07-15 12:59:48.898405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.137 [2024-07-15 12:59:48.898423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.137 qpair failed and we were unable to recover it.
00:29:57.137 [2024-07-15 12:59:48.898532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.137 [2024-07-15 12:59:48.898550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.137 qpair failed and we were unable to recover it.
00:29:57.137 [2024-07-15 12:59:48.898713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.137 [2024-07-15 12:59:48.898732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.137 qpair failed and we were unable to recover it.
00:29:57.137 [2024-07-15 12:59:48.898839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.137 [2024-07-15 12:59:48.898857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.137 qpair failed and we were unable to recover it.
00:29:57.137 [2024-07-15 12:59:48.899143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.137 [2024-07-15 12:59:48.899161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.137 qpair failed and we were unable to recover it.
00:29:57.137 [2024-07-15 12:59:48.899332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.137 [2024-07-15 12:59:48.899351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.137 qpair failed and we were unable to recover it.
00:29:57.137 [2024-07-15 12:59:48.899441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.137 [2024-07-15 12:59:48.899457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.137 qpair failed and we were unable to recover it.
00:29:57.137 [2024-07-15 12:59:48.899615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.137 [2024-07-15 12:59:48.899633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.137 qpair failed and we were unable to recover it.
00:29:57.137 [2024-07-15 12:59:48.899808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.137 [2024-07-15 12:59:48.899826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.137 qpair failed and we were unable to recover it. 00:29:57.137 [2024-07-15 12:59:48.899992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.137 [2024-07-15 12:59:48.900009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.137 qpair failed and we were unable to recover it. 00:29:57.137 [2024-07-15 12:59:48.900196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.137 [2024-07-15 12:59:48.900214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.137 qpair failed and we were unable to recover it. 00:29:57.137 [2024-07-15 12:59:48.900398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.137 [2024-07-15 12:59:48.900416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.137 qpair failed and we were unable to recover it. 00:29:57.137 [2024-07-15 12:59:48.900517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.137 [2024-07-15 12:59:48.900534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.137 qpair failed and we were unable to recover it. 
00:29:57.137 [2024-07-15 12:59:48.900645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.137 [2024-07-15 12:59:48.900663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.137 qpair failed and we were unable to recover it. 00:29:57.137 [2024-07-15 12:59:48.900837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.137 [2024-07-15 12:59:48.900855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.137 qpair failed and we were unable to recover it. 00:29:57.137 [2024-07-15 12:59:48.900954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.137 [2024-07-15 12:59:48.900971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.137 qpair failed and we were unable to recover it. 00:29:57.137 [2024-07-15 12:59:48.901064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.137 [2024-07-15 12:59:48.901081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.137 qpair failed and we were unable to recover it. 00:29:57.137 [2024-07-15 12:59:48.901181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.137 [2024-07-15 12:59:48.901199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.137 qpair failed and we were unable to recover it. 
00:29:57.137 [2024-07-15 12:59:48.901451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.137 [2024-07-15 12:59:48.901470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.137 qpair failed and we were unable to recover it. 00:29:57.137 [2024-07-15 12:59:48.901633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.137 [2024-07-15 12:59:48.901651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.137 qpair failed and we were unable to recover it. 00:29:57.137 [2024-07-15 12:59:48.901808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.137 [2024-07-15 12:59:48.901826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.137 qpair failed and we were unable to recover it. 00:29:57.137 [2024-07-15 12:59:48.901922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.137 [2024-07-15 12:59:48.901938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.137 qpair failed and we were unable to recover it. 00:29:57.137 [2024-07-15 12:59:48.902111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.137 [2024-07-15 12:59:48.902128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.137 qpair failed and we were unable to recover it. 
00:29:57.137 [2024-07-15 12:59:48.902292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.137 [2024-07-15 12:59:48.902311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.137 qpair failed and we were unable to recover it. 00:29:57.137 [2024-07-15 12:59:48.902478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.137 [2024-07-15 12:59:48.902495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.137 qpair failed and we were unable to recover it. 00:29:57.137 [2024-07-15 12:59:48.902586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.137 [2024-07-15 12:59:48.902602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.137 qpair failed and we were unable to recover it. 00:29:57.137 [2024-07-15 12:59:48.902808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.137 [2024-07-15 12:59:48.902826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.137 qpair failed and we were unable to recover it. 00:29:57.137 [2024-07-15 12:59:48.902932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.137 [2024-07-15 12:59:48.902950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.137 qpair failed and we were unable to recover it. 
00:29:57.137 [2024-07-15 12:59:48.903108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.137 [2024-07-15 12:59:48.903126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.903230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.903247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.903347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.903364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.903561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.903579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.903811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.903828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 
00:29:57.138 [2024-07-15 12:59:48.903990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.904007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.904155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.904173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.904496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.904514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.904745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.904763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.904851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.904872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 
00:29:57.138 [2024-07-15 12:59:48.905032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.905049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.905137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.905154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.905398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.905417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.905591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.905608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.905839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.905857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 
00:29:57.138 [2024-07-15 12:59:48.905959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.905977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.906265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.906283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.906388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.906406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.906498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.906515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.906746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.906763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 
00:29:57.138 [2024-07-15 12:59:48.906942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.906959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.907193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.907211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.907315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.907334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.907571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.907588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.907748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.907766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 
00:29:57.138 [2024-07-15 12:59:48.908045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.908063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.908179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.908196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.908385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.908403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.908514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.908532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.908622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.908639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 
00:29:57.138 [2024-07-15 12:59:48.908750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.908768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.908933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.908951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.909191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.909209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.909319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.909338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.909492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.909509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 
00:29:57.138 [2024-07-15 12:59:48.909767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.909784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.909892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.909910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.910086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.910104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.910381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.910399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.910556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.910574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 
00:29:57.138 [2024-07-15 12:59:48.910672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.138 [2024-07-15 12:59:48.910689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.138 qpair failed and we were unable to recover it. 00:29:57.138 [2024-07-15 12:59:48.910795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.139 [2024-07-15 12:59:48.910813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.139 qpair failed and we were unable to recover it. 00:29:57.139 [2024-07-15 12:59:48.911011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.139 [2024-07-15 12:59:48.911028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.139 qpair failed and we were unable to recover it. 00:29:57.139 [2024-07-15 12:59:48.911192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.139 [2024-07-15 12:59:48.911210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.139 qpair failed and we were unable to recover it. 00:29:57.139 [2024-07-15 12:59:48.911392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.139 [2024-07-15 12:59:48.911410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.139 qpair failed and we were unable to recover it. 
00:29:57.139 [2024-07-15 12:59:48.911667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.139 [2024-07-15 12:59:48.911685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.139 qpair failed and we were unable to recover it. 00:29:57.139 [2024-07-15 12:59:48.911932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.139 [2024-07-15 12:59:48.911950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.139 qpair failed and we were unable to recover it. 00:29:57.139 [2024-07-15 12:59:48.912039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.139 [2024-07-15 12:59:48.912056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.139 qpair failed and we were unable to recover it. 00:29:57.139 [2024-07-15 12:59:48.912229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.139 [2024-07-15 12:59:48.912247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.139 qpair failed and we were unable to recover it. 00:29:57.139 [2024-07-15 12:59:48.912373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.139 [2024-07-15 12:59:48.912394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.139 qpair failed and we were unable to recover it. 
00:29:57.139 [2024-07-15 12:59:48.912506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.139 [2024-07-15 12:59:48.912523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.139 qpair failed and we were unable to recover it. 00:29:57.139 [2024-07-15 12:59:48.912614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.139 [2024-07-15 12:59:48.912631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.139 qpair failed and we were unable to recover it. 00:29:57.139 [2024-07-15 12:59:48.912823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.139 [2024-07-15 12:59:48.912840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.139 qpair failed and we were unable to recover it. 00:29:57.139 [2024-07-15 12:59:48.912936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.139 [2024-07-15 12:59:48.912953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.139 qpair failed and we were unable to recover it. 00:29:57.139 [2024-07-15 12:59:48.913060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.139 [2024-07-15 12:59:48.913078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.139 qpair failed and we were unable to recover it. 
00:29:57.139 [2024-07-15 12:59:48.913248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.913284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.913386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.913404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.913500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.913517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.913613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.913631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.913751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.913770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.913872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.913890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.914051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.914069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.914229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.914246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.914425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.914444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.914674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.914691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.914787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.914804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.914977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.914995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.915233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.915251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.915458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.915477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.915647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.915665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.915837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.915854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.915954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.915973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.916267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.916285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.916376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.916393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.916492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.916510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.916739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.916756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.917021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.917039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.917151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.917169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.917330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.917349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.917446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.917463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.917736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.917754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.918000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.139 [2024-07-15 12:59:48.918018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.139 qpair failed and we were unable to recover it.
00:29:57.139 [2024-07-15 12:59:48.918140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.918158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.918247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.918269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.918356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.918373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.918604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.918622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.918802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.918820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.918934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.918952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.919148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.919167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.919328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.919350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.919524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.919543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.919720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.919738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.919898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.919916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.920017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.920035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.920218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.920236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.920472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.920542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d0000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.920869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.920901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d0000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.921161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.921191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d0000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.921420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.921451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d0000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.921588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.921616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d0000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.921872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.921901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d0000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.922104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.922124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.922229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.922246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.922458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.922476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.922650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.922668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.922848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.922866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.923042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.923059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.923182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.923200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.923474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.923493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.923662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.923680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.923774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.923792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.924023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.924041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.924222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.924239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.924477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.924495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.924594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.924614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.924776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.924793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.924997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.925029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d0000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.140 [2024-07-15 12:59:48.925229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.140 [2024-07-15 12:59:48.925268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d0000b90 with addr=10.0.0.2, port=4420
00:29:57.140 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.925488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.925518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d0000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.925661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.925680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.925794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.925812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.926045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.926063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.926280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.926299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.926474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.926493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.926746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.926763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.927023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.927041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.927299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.927317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.927429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.927446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.927655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.927673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.927866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.927887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.928055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.928073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.928170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.928188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.928302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.928320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.928433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.928454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.928684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.928701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.928861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.928879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.929050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.929068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.929317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.929336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.929495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.929513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.929606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.929624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.929732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.929750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.929989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.930007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.930125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.930142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.930383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.930401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.930654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.930672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.930769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.930786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.930905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.930923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.931017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.931035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.931210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.931228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.931406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.931425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.931684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.931701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.931931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.931948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.932129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.932147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.932323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.932341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.932503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.932521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.932615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.932633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.932849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.932880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.932999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.141 [2024-07-15 12:59:48.933028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-07-15 12:59:48.933232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.142 [2024-07-15 12:59:48.933273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.142 qpair failed and we were unable to recover it.
00:29:57.142 [2024-07-15 12:59:48.933423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.142 [2024-07-15 12:59:48.933452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.142 qpair failed and we were unable to recover it.
00:29:57.142 [2024-07-15 12:59:48.933676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.142 [2024-07-15 12:59:48.933706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.142 qpair failed and we were unable to recover it.
00:29:57.142 [2024-07-15 12:59:48.933854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.142 [2024-07-15 12:59:48.933884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.142 qpair failed and we were unable to recover it.
00:29:57.142 [2024-07-15 12:59:48.934086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.142 [2024-07-15 12:59:48.934103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.142 qpair failed and we were unable to recover it.
00:29:57.142 [2024-07-15 12:59:48.934216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.142 [2024-07-15 12:59:48.934233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.142 qpair failed and we were unable to recover it.
00:29:57.142 [2024-07-15 12:59:48.934438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.142 [2024-07-15 12:59:48.934457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.142 qpair failed and we were unable to recover it.
00:29:57.142 [2024-07-15 12:59:48.934550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.142 [2024-07-15 12:59:48.934568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.142 qpair failed and we were unable to recover it.
00:29:57.142 [2024-07-15 12:59:48.934673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.142 [2024-07-15 12:59:48.934691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.142 qpair failed and we were unable to recover it.
00:29:57.142 [2024-07-15 12:59:48.934797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.142 [2024-07-15 12:59:48.934815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.142 qpair failed and we were unable to recover it.
00:29:57.142 [2024-07-15 12:59:48.934905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.142 [2024-07-15 12:59:48.934922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.142 qpair failed and we were unable to recover it.
00:29:57.142 [2024-07-15 12:59:48.935103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.142 [2024-07-15 12:59:48.935125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.142 qpair failed and we were unable to recover it.
00:29:57.142 [2024-07-15 12:59:48.935289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.935308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.935488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.935507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.935665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.935683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.935867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.935885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.936008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.936026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 
00:29:57.142 [2024-07-15 12:59:48.936230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.936247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.936389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.936408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.936570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.936588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.936699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.936717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.936819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.936837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 
00:29:57.142 [2024-07-15 12:59:48.937128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.937145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.937391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.937409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.937523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.937541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.937634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.937652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.937867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.937885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 
00:29:57.142 [2024-07-15 12:59:48.938158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.938175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.938280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.938298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.938581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.938599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.938727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.938744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.938859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.938877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 
00:29:57.142 [2024-07-15 12:59:48.939137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.939156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.939347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.939366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.939541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.939571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.939702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.939732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.939917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.939947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 
00:29:57.142 [2024-07-15 12:59:48.940094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.940123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.940313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.940345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.940531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.940560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-07-15 12:59:48.940702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-07-15 12:59:48.940731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.940983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.941001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 
00:29:57.143 [2024-07-15 12:59:48.941200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.941218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.941451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.941470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.941646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.941676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.941903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.941932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.942131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.942160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 
00:29:57.143 [2024-07-15 12:59:48.942365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.942397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.942582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.942600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.942783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.942813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.943090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.943120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.943384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.943421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 
00:29:57.143 [2024-07-15 12:59:48.943624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.943641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.943745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.943763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.943935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.943953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.944042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.944060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.944289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.944309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 
00:29:57.143 [2024-07-15 12:59:48.944529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.944547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.944708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.944725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.945004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.945033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.945148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.945178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.945369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.945400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 
00:29:57.143 [2024-07-15 12:59:48.945613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.945643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.945858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.945887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.946119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.946149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.946315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.946346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.946531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.946561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 
00:29:57.143 [2024-07-15 12:59:48.946811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.946828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.947109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.947126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.947430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.947449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.947562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.947580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.947761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.947780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 
00:29:57.143 [2024-07-15 12:59:48.948000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.948019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.948192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.948229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.948384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.948415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.948612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.948641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.948770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.948799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 
00:29:57.143 [2024-07-15 12:59:48.949021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.949038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.949142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.949160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.949269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.949287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.949538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.949557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-07-15 12:59:48.949652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-07-15 12:59:48.949671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 
00:29:57.144 [2024-07-15 12:59:48.949853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-07-15 12:59:48.949870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-07-15 12:59:48.950041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-07-15 12:59:48.950059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-07-15 12:59:48.950175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-07-15 12:59:48.950193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-07-15 12:59:48.950369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-07-15 12:59:48.950388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-07-15 12:59:48.950569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-07-15 12:59:48.950587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 
00:29:57.144 [2024-07-15 12:59:48.950800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-07-15 12:59:48.950817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-07-15 12:59:48.951046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-07-15 12:59:48.951064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-07-15 12:59:48.951332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-07-15 12:59:48.951350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-07-15 12:59:48.951464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-07-15 12:59:48.951482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-07-15 12:59:48.951589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-07-15 12:59:48.951610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 
00:29:57.147 [2024-07-15 12:59:48.974934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.974952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.975111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.975128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.975221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.975239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.975334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.975352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.975567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.975585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 
00:29:57.147 [2024-07-15 12:59:48.975683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.975701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.975876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.975894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.976019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.976037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.976153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.976171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.976359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.976377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 
00:29:57.147 [2024-07-15 12:59:48.976500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.976518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.976694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.976712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.976836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.976869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.977178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.977207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.977448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.977478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 
00:29:57.147 [2024-07-15 12:59:48.977788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.977818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.978101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.978131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.978317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.978348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.978533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.978563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.978826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.978855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 
00:29:57.147 [2024-07-15 12:59:48.979078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.979107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.979393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.979423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.979658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.979687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.979842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.979860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.979959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.979977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 
00:29:57.147 [2024-07-15 12:59:48.980180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.980198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.980457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.980476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.980648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.980665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.980895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.980932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.981120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.981149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 
00:29:57.147 [2024-07-15 12:59:48.981288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.981319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.981468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.981498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.981753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.981783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.981921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.981939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.982194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.982212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 
00:29:57.147 [2024-07-15 12:59:48.982384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.982402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.982512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.982533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.982788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.982805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.982928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.982946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 00:29:57.147 [2024-07-15 12:59:48.983197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.147 [2024-07-15 12:59:48.983215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.147 qpair failed and we were unable to recover it. 
00:29:57.147 [2024-07-15 12:59:48.983435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.983454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.983559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.983576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.983784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.983801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.983953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.983971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.984117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.984135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 
00:29:57.148 [2024-07-15 12:59:48.984294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.984312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.984404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.984422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.984531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.984548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.984786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.984804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.984978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.984996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 
00:29:57.148 [2024-07-15 12:59:48.985170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.985199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.985510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.985541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.985730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.985760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.986012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.986042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.986244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.986284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 
00:29:57.148 [2024-07-15 12:59:48.986550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.986579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.986861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.986880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.986999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.987016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.987253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.987276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.987398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.987415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 
00:29:57.148 [2024-07-15 12:59:48.987506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.987523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.987806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.987824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.987996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.988014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.988283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.988302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.988556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.988575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 
00:29:57.148 [2024-07-15 12:59:48.988843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.988861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.989047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.989065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.989176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.989194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.989404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.989423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.989625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.989643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 
00:29:57.148 [2024-07-15 12:59:48.989880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.989910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.990167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.990196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.990394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.990424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.990700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.990718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 00:29:57.148 [2024-07-15 12:59:48.990869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.148 [2024-07-15 12:59:48.990886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.148 qpair failed and we were unable to recover it. 
00:29:57.148 [2024-07-15 12:59:48.991140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.148 [2024-07-15 12:59:48.991158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.148 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failure sequence repeated for every attempt from 12:59:48.991 through 12:59:49.016; all repeats report errno = 111, tqpair=0x7f75d8000b90, addr=10.0.0.2, port=4420 ...]
00:29:57.151 [2024-07-15 12:59:49.016405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.151 [2024-07-15 12:59:49.016437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.151 qpair failed and we were unable to recover it.
00:29:57.151 [2024-07-15 12:59:49.016749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.016779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-07-15 12:59:49.017048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.017079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-07-15 12:59:49.017384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.017414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-07-15 12:59:49.017644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.017673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-07-15 12:59:49.017876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.017894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 
00:29:57.151 [2024-07-15 12:59:49.018006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.018024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-07-15 12:59:49.018184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.018205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-07-15 12:59:49.018333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.018352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-07-15 12:59:49.018645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.018663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-07-15 12:59:49.018857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.018875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 
00:29:57.151 [2024-07-15 12:59:49.018983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.019000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-07-15 12:59:49.019282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.019301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-07-15 12:59:49.019626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.019670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-07-15 12:59:49.019927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.019957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-07-15 12:59:49.020156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.020185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 
00:29:57.151 [2024-07-15 12:59:49.020460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.020478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-07-15 12:59:49.020737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.020755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-07-15 12:59:49.021020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.021038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-07-15 12:59:49.021224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.021241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-07-15 12:59:49.021532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.021551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 
00:29:57.151 [2024-07-15 12:59:49.021859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.021877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-07-15 12:59:49.022079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.022114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-07-15 12:59:49.022345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-07-15 12:59:49.022376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-07-15 12:59:49.022655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.022685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.022897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.022927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 
00:29:57.152 [2024-07-15 12:59:49.023157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.023175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.023352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.023372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.023635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.023665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.023913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.023932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.024093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.024111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 
00:29:57.152 [2024-07-15 12:59:49.024324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.024355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.024564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.024594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.024810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.024840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.025107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.025138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.025406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.025436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 
00:29:57.152 [2024-07-15 12:59:49.025749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.025779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.026002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.026032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.026334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.026365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.026651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.026681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.026934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.026952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 
00:29:57.152 [2024-07-15 12:59:49.027181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.027199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.027360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.027379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.027690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.027720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.027944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.027974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.028279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.028299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 
00:29:57.152 [2024-07-15 12:59:49.028502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.028521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.028722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.028743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.028922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.028941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.029155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.029184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.029323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.029354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 
00:29:57.152 [2024-07-15 12:59:49.029575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.029605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.029828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.029858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.030040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.030058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.030261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.030281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.030541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.030558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 
00:29:57.152 [2024-07-15 12:59:49.030802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.030820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.031107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.031125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.031237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.031262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.031471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.031489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.031681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.031699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 
00:29:57.152 [2024-07-15 12:59:49.031908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.031926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.032201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.032219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.032424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.032443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-07-15 12:59:49.032625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-07-15 12:59:49.032658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.430 [2024-07-15 12:59:49.032887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.430 [2024-07-15 12:59:49.032917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.430 qpair failed and we were unable to recover it. 
00:29:57.430 [2024-07-15 12:59:49.033199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.430 [2024-07-15 12:59:49.033230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.430 qpair failed and we were unable to recover it. 00:29:57.430 [2024-07-15 12:59:49.033399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.430 [2024-07-15 12:59:49.033430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.430 qpair failed and we were unable to recover it. 00:29:57.430 [2024-07-15 12:59:49.033687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.430 [2024-07-15 12:59:49.033717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.430 qpair failed and we were unable to recover it. 00:29:57.430 [2024-07-15 12:59:49.033989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.430 [2024-07-15 12:59:49.034006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.430 qpair failed and we were unable to recover it. 00:29:57.430 [2024-07-15 12:59:49.034239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.430 [2024-07-15 12:59:49.034273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.430 qpair failed and we were unable to recover it. 
00:29:57.430 [2024-07-15 12:59:49.034514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.430 [2024-07-15 12:59:49.034532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.430 qpair failed and we were unable to recover it. 00:29:57.430 [2024-07-15 12:59:49.034784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.430 [2024-07-15 12:59:49.034803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.430 qpair failed and we were unable to recover it. 00:29:57.430 [2024-07-15 12:59:49.034991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.430 [2024-07-15 12:59:49.035009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.430 qpair failed and we were unable to recover it. 00:29:57.430 [2024-07-15 12:59:49.035210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.430 [2024-07-15 12:59:49.035228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.430 qpair failed and we were unable to recover it. 00:29:57.430 [2024-07-15 12:59:49.035429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.430 [2024-07-15 12:59:49.035448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.430 qpair failed and we were unable to recover it. 
00:29:57.430 [2024-07-15 12:59:49.035678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.430 [2024-07-15 12:59:49.035696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.430 qpair failed and we were unable to recover it.
[The same three-line failure sequence — connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reporting a sock connection error on tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously from 12:59:49.035952 through 12:59:49.066776 and is elided here.]
00:29:57.433 [2024-07-15 12:59:49.067065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.433 [2024-07-15 12:59:49.067094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.433 qpair failed and we were unable to recover it. 00:29:57.433 [2024-07-15 12:59:49.067331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.433 [2024-07-15 12:59:49.067362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.433 qpair failed and we were unable to recover it. 00:29:57.433 [2024-07-15 12:59:49.067623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.433 [2024-07-15 12:59:49.067653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.433 qpair failed and we were unable to recover it. 00:29:57.433 [2024-07-15 12:59:49.067970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.433 [2024-07-15 12:59:49.067988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.433 qpair failed and we were unable to recover it. 00:29:57.433 [2024-07-15 12:59:49.068264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.433 [2024-07-15 12:59:49.068283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.433 qpair failed and we were unable to recover it. 
00:29:57.433 [2024-07-15 12:59:49.068521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.433 [2024-07-15 12:59:49.068539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.433 qpair failed and we were unable to recover it. 00:29:57.433 [2024-07-15 12:59:49.068826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.433 [2024-07-15 12:59:49.068844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.433 qpair failed and we were unable to recover it. 00:29:57.433 [2024-07-15 12:59:49.069076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.433 [2024-07-15 12:59:49.069094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.433 qpair failed and we were unable to recover it. 00:29:57.433 [2024-07-15 12:59:49.069311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.433 [2024-07-15 12:59:49.069330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.433 qpair failed and we were unable to recover it. 00:29:57.433 [2024-07-15 12:59:49.069496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.433 [2024-07-15 12:59:49.069515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.433 qpair failed and we were unable to recover it. 
00:29:57.433 [2024-07-15 12:59:49.069752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.433 [2024-07-15 12:59:49.069782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.433 qpair failed and we were unable to recover it. 00:29:57.433 [2024-07-15 12:59:49.070102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.433 [2024-07-15 12:59:49.070131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.070349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.070380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.070667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.070697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.070990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.071019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 
00:29:57.434 [2024-07-15 12:59:49.071309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.071339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.071595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.071630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.071909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.071939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.072194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.072223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.072541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.072572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 
00:29:57.434 [2024-07-15 12:59:49.072835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.072864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.073150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.073179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.073381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.073412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.073723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.073753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.073956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.073986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 
00:29:57.434 [2024-07-15 12:59:49.074245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.074285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.074496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.074526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.074780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.074810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.075017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.075035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.075275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.075294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 
00:29:57.434 [2024-07-15 12:59:49.075469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.075487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.075723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.075741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.076008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.076038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.076352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.076384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.076645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.076675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 
00:29:57.434 [2024-07-15 12:59:49.076951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.076980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.077273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.077313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.077492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.077510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.077775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.077793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-07-15 12:59:49.078084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.078102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 
00:29:57.434 [2024-07-15 12:59:49.078285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.434 [2024-07-15 12:59:49.078304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.435 [2024-07-15 12:59:49.078471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.078489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-07-15 12:59:49.078664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.078681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-07-15 12:59:49.078869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.078888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-07-15 12:59:49.079122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.079140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 
00:29:57.435 [2024-07-15 12:59:49.079339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.079358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-07-15 12:59:49.079528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.079546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-07-15 12:59:49.079733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.079762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-07-15 12:59:49.079967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.079997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-07-15 12:59:49.080222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.080250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 
00:29:57.435 [2024-07-15 12:59:49.080573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.080603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-07-15 12:59:49.080731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.080760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-07-15 12:59:49.081045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.081063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-07-15 12:59:49.081328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.081346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-07-15 12:59:49.081579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.081597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 
00:29:57.435 [2024-07-15 12:59:49.081840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.081858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-07-15 12:59:49.082043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.082061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-07-15 12:59:49.082346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.082377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-07-15 12:59:49.082643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.082673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-07-15 12:59:49.082881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.082911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 
00:29:57.435 [2024-07-15 12:59:49.083162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.083179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-07-15 12:59:49.083437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.083456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-07-15 12:59:49.083626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.083644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-07-15 12:59:49.083925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.083955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-07-15 12:59:49.084300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.084330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 
00:29:57.435 [2024-07-15 12:59:49.084621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.435 [2024-07-15 12:59:49.084650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-07-15 12:59:49.084860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.436 [2024-07-15 12:59:49.084889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.436 qpair failed and we were unable to recover it. 00:29:57.436 [2024-07-15 12:59:49.085096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.436 [2024-07-15 12:59:49.085126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.436 qpair failed and we were unable to recover it. 00:29:57.436 [2024-07-15 12:59:49.085354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.436 [2024-07-15 12:59:49.085385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.436 qpair failed and we were unable to recover it. 00:29:57.436 [2024-07-15 12:59:49.085575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.436 [2024-07-15 12:59:49.085604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.436 qpair failed and we were unable to recover it. 
00:29:57.436 [2024-07-15 12:59:49.085871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.436 [2024-07-15 12:59:49.085901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.436 qpair failed and we were unable to recover it. 00:29:57.436 [2024-07-15 12:59:49.086194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.436 [2024-07-15 12:59:49.086213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.436 qpair failed and we were unable to recover it. 00:29:57.436 [2024-07-15 12:59:49.086407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.436 [2024-07-15 12:59:49.086426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.436 qpair failed and we were unable to recover it. 00:29:57.436 [2024-07-15 12:59:49.086689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.436 [2024-07-15 12:59:49.086707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.436 qpair failed and we were unable to recover it. 00:29:57.436 [2024-07-15 12:59:49.086970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.436 [2024-07-15 12:59:49.086988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.436 qpair failed and we were unable to recover it. 
00:29:57.436 [2024-07-15 12:59:49.087269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.087289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.087468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.087486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.087750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.087768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.087933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.087951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.088161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.088178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.088372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.088406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.088696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.088726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.088851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.088880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.089091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.089127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.089437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.089459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.089729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.089747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.089840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.089858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.090082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.090112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.090331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.090362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.090644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.090674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.090969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.090999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.091207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.091237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.091530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.091562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.091773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.091804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.092111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.092140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.092344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.092363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.092605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.092635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.092846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.092876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.093151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.093180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.093449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.093480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.093799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.093829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.094110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.094128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.094312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.094331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.094593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.436 [2024-07-15 12:59:49.094611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.436 qpair failed and we were unable to recover it.
00:29:57.436 [2024-07-15 12:59:49.094847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.094864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.095073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.095092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.095208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.095225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.095500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.095519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.095639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.095658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.095755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.095773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.095966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.095985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.096250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.096276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.096545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.096564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.096781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.096799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.096963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.096981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.097178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.097196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.097464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.097483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.097655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.097673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.097874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.097904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.098190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.098220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.098492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.098523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.098826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.098856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.099144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.099173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.099386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.099422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.099685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.099715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.099961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.099979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.100226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.100244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.100519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.100538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.100705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.100723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.100864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.100882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.101081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.101111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.101423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.101455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.101747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.101777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.102072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.102102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.102392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.102411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.102601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.102619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.102812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.102830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.437 [2024-07-15 12:59:49.103129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.437 [2024-07-15 12:59:49.103159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.437 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.103418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.103449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.103714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.103744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.104064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.104082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.104361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.104380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.104643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.104677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.104963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.104993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.105180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.105210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.105500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.105519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.105812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.105830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.106036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.106054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.106300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.106319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.106505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.106523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.106704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.106722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.106991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.107021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.107286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.107317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.107621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.107650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.107938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.107967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.108270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.108311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.108430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.108447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.108618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.108636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.108756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.108774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.109030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.109048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.109305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.109323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.109574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.109592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.109861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.109880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.110059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.110081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.110320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.110340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.110600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.110618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.110797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.438 [2024-07-15 12:59:49.110815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.438 qpair failed and we were unable to recover it.
00:29:57.438 [2024-07-15 12:59:49.111086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.111104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.111291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.111310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.111578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.111596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.111763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.111781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.111976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.111994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.112270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.112288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.112504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.112522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.112699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.112716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.113018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.113036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.113308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.113327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.113500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.113518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.113729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.113760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.114052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.114081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.114367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.114399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.114603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.114634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.114837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.114866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.115017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.115035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.115307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.115339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.115617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.115647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.115851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.115900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.116186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.116216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.116456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.116487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.116765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.439 [2024-07-15 12:59:49.116796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.439 qpair failed and we were unable to recover it.
00:29:57.439 [2024-07-15 12:59:49.117109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-07-15 12:59:49.117138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-07-15 12:59:49.117336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-07-15 12:59:49.117368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-07-15 12:59:49.117582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-07-15 12:59:49.117612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-07-15 12:59:49.117895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-07-15 12:59:49.117924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-07-15 12:59:49.118076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-07-15 12:59:49.118094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 
00:29:57.439 [2024-07-15 12:59:49.118278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-07-15 12:59:49.118297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-07-15 12:59:49.118568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-07-15 12:59:49.118587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-07-15 12:59:49.118823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-07-15 12:59:49.118841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-07-15 12:59:49.118967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.118986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-07-15 12:59:49.119232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.119250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 
00:29:57.440 [2024-07-15 12:59:49.119538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.119556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-07-15 12:59:49.119807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.119826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-07-15 12:59:49.119947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.119965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-07-15 12:59:49.120086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.120108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-07-15 12:59:49.120284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.120303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 
00:29:57.440 [2024-07-15 12:59:49.120549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.120567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-07-15 12:59:49.120826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.120856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-07-15 12:59:49.121155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.121185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-07-15 12:59:49.121474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.121493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-07-15 12:59:49.121626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.121644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 
00:29:57.440 [2024-07-15 12:59:49.121932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.121950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-07-15 12:59:49.122269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.122289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-07-15 12:59:49.122469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.122488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-07-15 12:59:49.122683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.122712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-07-15 12:59:49.122953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.122982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 
00:29:57.440 [2024-07-15 12:59:49.123206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.123236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-07-15 12:59:49.123434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.123453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-07-15 12:59:49.123680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.123709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-07-15 12:59:49.124001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.124030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-07-15 12:59:49.124275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.124307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 
00:29:57.440 [2024-07-15 12:59:49.124518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.124536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-07-15 12:59:49.124806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-07-15 12:59:49.124824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.125093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.125111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.125324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.125343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.125525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.125544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 
00:29:57.441 [2024-07-15 12:59:49.125837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.125856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.126180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.126199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.126472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.126491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.126716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.126734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.127012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.127030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 
00:29:57.441 [2024-07-15 12:59:49.127215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.127233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.127432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.127452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.127658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.127688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.127988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.128017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.128289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.128321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 
00:29:57.441 [2024-07-15 12:59:49.128644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.128674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.128946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.128976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.129299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.129319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.129527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.129546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.129814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.129833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 
00:29:57.441 [2024-07-15 12:59:49.130079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.130097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.130358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.130377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.130645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.130664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.130902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.130923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.131170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.131188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 
00:29:57.441 [2024-07-15 12:59:49.131302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.131321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.131566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.131584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.131857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.131895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.132220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.132250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.132558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.132589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 
00:29:57.441 [2024-07-15 12:59:49.132871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-07-15 12:59:49.132901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-07-15 12:59:49.133051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.442 [2024-07-15 12:59:49.133081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.442 qpair failed and we were unable to recover it. 00:29:57.442 [2024-07-15 12:59:49.133379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.442 [2024-07-15 12:59:49.133410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.442 qpair failed and we were unable to recover it. 00:29:57.442 [2024-07-15 12:59:49.133655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.442 [2024-07-15 12:59:49.133674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.442 qpair failed and we were unable to recover it. 00:29:57.442 [2024-07-15 12:59:49.133895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.442 [2024-07-15 12:59:49.133913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.442 qpair failed and we were unable to recover it. 
00:29:57.442 [2024-07-15 12:59:49.134096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.442 [2024-07-15 12:59:49.134114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.442 qpair failed and we were unable to recover it. 00:29:57.442 [2024-07-15 12:59:49.134333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.442 [2024-07-15 12:59:49.134365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.442 qpair failed and we were unable to recover it. 00:29:57.442 [2024-07-15 12:59:49.134665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.442 [2024-07-15 12:59:49.134696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.442 qpair failed and we were unable to recover it. 00:29:57.442 [2024-07-15 12:59:49.135034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.442 [2024-07-15 12:59:49.135064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.442 qpair failed and we were unable to recover it. 00:29:57.442 [2024-07-15 12:59:49.135293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.442 [2024-07-15 12:59:49.135324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.442 qpair failed and we were unable to recover it. 
00:29:57.442 [2024-07-15 12:59:49.135493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.442 [2024-07-15 12:59:49.135523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.442 qpair failed and we were unable to recover it. 00:29:57.442 [2024-07-15 12:59:49.135736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.442 [2024-07-15 12:59:49.135766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.442 qpair failed and we were unable to recover it. 00:29:57.442 [2024-07-15 12:59:49.136066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.442 [2024-07-15 12:59:49.136084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.442 qpair failed and we were unable to recover it. 00:29:57.442 [2024-07-15 12:59:49.136328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.442 [2024-07-15 12:59:49.136347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.442 qpair failed and we were unable to recover it. 00:29:57.442 [2024-07-15 12:59:49.136644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.442 [2024-07-15 12:59:49.136663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.442 qpair failed and we were unable to recover it. 
00:29:57.442 [2024-07-15 12:59:49.136934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.442 [2024-07-15 12:59:49.136952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.442 qpair failed and we were unable to recover it.
00:29:57.442 [2024-07-15 12:59:49.137182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.442 [2024-07-15 12:59:49.137200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.442 qpair failed and we were unable to recover it.
00:29:57.442 [2024-07-15 12:59:49.137317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.442 [2024-07-15 12:59:49.137337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.442 qpair failed and we were unable to recover it.
00:29:57.442 [2024-07-15 12:59:49.137586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.442 [2024-07-15 12:59:49.137604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.442 qpair failed and we were unable to recover it.
00:29:57.442 [2024-07-15 12:59:49.137869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.442 [2024-07-15 12:59:49.137887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.442 qpair failed and we were unable to recover it.
00:29:57.442 [2024-07-15 12:59:49.137994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.442 [2024-07-15 12:59:49.138012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.442 qpair failed and we were unable to recover it.
00:29:57.442 [2024-07-15 12:59:49.138287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.442 [2024-07-15 12:59:49.138307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.442 qpair failed and we were unable to recover it.
00:29:57.442 [2024-07-15 12:59:49.138492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.442 [2024-07-15 12:59:49.138522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.442 qpair failed and we were unable to recover it.
00:29:57.442 [2024-07-15 12:59:49.138810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.442 [2024-07-15 12:59:49.138840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.442 qpair failed and we were unable to recover it.
00:29:57.442 [2024-07-15 12:59:49.139058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.442 [2024-07-15 12:59:49.139088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.442 qpair failed and we were unable to recover it.
00:29:57.442 [2024-07-15 12:59:49.139412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.442 [2024-07-15 12:59:49.139443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.442 qpair failed and we were unable to recover it.
00:29:57.442 [2024-07-15 12:59:49.139640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.442 [2024-07-15 12:59:49.139659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.442 qpair failed and we were unable to recover it.
00:29:57.442 [2024-07-15 12:59:49.139855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.442 [2024-07-15 12:59:49.139873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.442 qpair failed and we were unable to recover it.
00:29:57.442 [2024-07-15 12:59:49.140177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.442 [2024-07-15 12:59:49.140195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.442 qpair failed and we were unable to recover it.
00:29:57.442 [2024-07-15 12:59:49.140370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.442 [2024-07-15 12:59:49.140389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.442 qpair failed and we were unable to recover it.
00:29:57.442 [2024-07-15 12:59:49.140640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.442 [2024-07-15 12:59:49.140659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.442 qpair failed and we were unable to recover it.
00:29:57.442 [2024-07-15 12:59:49.140831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.442 [2024-07-15 12:59:49.140850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.442 qpair failed and we were unable to recover it.
00:29:57.442 [2024-07-15 12:59:49.140975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.442 [2024-07-15 12:59:49.140994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.442 qpair failed and we were unable to recover it.
00:29:57.442 [2024-07-15 12:59:49.141093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.442 [2024-07-15 12:59:49.141114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.442 qpair failed and we were unable to recover it.
00:29:57.442 [2024-07-15 12:59:49.141287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.442 [2024-07-15 12:59:49.141306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.141479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.141514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.141790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.141821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.142020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.142049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.142389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.142421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.142642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.142661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.142925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.142942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.143133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.143151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.143451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.143471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.143670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.143688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.143889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.143919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.144143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.144172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.144457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.144489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.144769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.144799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.145080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.145110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.145422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.145454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.145651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.145669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.145845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.145889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.146166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.146197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.146458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.146477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.146679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.146697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.146957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.146976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.147225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.147244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.147550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.147568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.147739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.147757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.148046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.148065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.148263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.148283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.148555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.148574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.443 qpair failed and we were unable to recover it.
00:29:57.443 [2024-07-15 12:59:49.148714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.443 [2024-07-15 12:59:49.148732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.149078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.149096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.149340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.149360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.149578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.149597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.149873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.149892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.150153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.150171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.150433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.150453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.150724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.150742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.151031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.151050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.151275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.151295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.151577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.151596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.151857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.151879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.152151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.152169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.152417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.152449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.152669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.152700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.153046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.153076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.153282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.153314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.153609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.153640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.153929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.153969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.154222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.154240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.154452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.154471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.154749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.154768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.155038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.155056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.155165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.155183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.155405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.155436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.155790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.155820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.156124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.156154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.156447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.156478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.156707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.156737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.157042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.444 [2024-07-15 12:59:49.157072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.444 qpair failed and we were unable to recover it.
00:29:57.444 [2024-07-15 12:59:49.157365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.157396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.157690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.157721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.158019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.158050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.158369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.158389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.158635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.158654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.158875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.158906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.159220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.159249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.159471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.159490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.159669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.159687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.159914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.159944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.160226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.160267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.160480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.160500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.160764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.160795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.160933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.160962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.161223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.161265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.161550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.161581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.161856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.161886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.162051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.162082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.162431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.162463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.162681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.162711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.163014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.163045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.163241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.163269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.163529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.163547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.163752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.163771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.163960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.163978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.164168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.164186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.164366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.164385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.164706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.164725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.164980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.164999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.165283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.165302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.165521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.165540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.165796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.165815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.166036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.166054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.166300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.166320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.445 qpair failed and we were unable to recover it.
00:29:57.445 [2024-07-15 12:59:49.166593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.445 [2024-07-15 12:59:49.166612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.446 qpair failed and we were unable to recover it.
00:29:57.446 [2024-07-15 12:59:49.166809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.446 [2024-07-15 12:59:49.166827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.446 qpair failed and we were unable to recover it.
00:29:57.446 [2024-07-15 12:59:49.167074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.446 [2024-07-15 12:59:49.167092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.446 qpair failed and we were unable to recover it.
00:29:57.446 [2024-07-15 12:59:49.167396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.446 [2024-07-15 12:59:49.167416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.446 qpair failed and we were unable to recover it.
00:29:57.446 [2024-07-15 12:59:49.167662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.446 [2024-07-15 12:59:49.167681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.446 qpair failed and we were unable to recover it. 00:29:57.446 [2024-07-15 12:59:49.167882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.446 [2024-07-15 12:59:49.167901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.446 qpair failed and we were unable to recover it. 00:29:57.446 [2024-07-15 12:59:49.168118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.446 [2024-07-15 12:59:49.168136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.446 qpair failed and we were unable to recover it. 00:29:57.446 [2024-07-15 12:59:49.168282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.446 [2024-07-15 12:59:49.168301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.446 qpair failed and we were unable to recover it. 00:29:57.446 [2024-07-15 12:59:49.168584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.446 [2024-07-15 12:59:49.168602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.446 qpair failed and we were unable to recover it. 
00:29:57.446 [2024-07-15 12:59:49.168807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.446 [2024-07-15 12:59:49.168826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.446 qpair failed and we were unable to recover it. 00:29:57.446 [2024-07-15 12:59:49.169072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.446 [2024-07-15 12:59:49.169090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.446 qpair failed and we were unable to recover it. 00:29:57.446 [2024-07-15 12:59:49.169358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.446 [2024-07-15 12:59:49.169377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.446 qpair failed and we were unable to recover it. 00:29:57.446 [2024-07-15 12:59:49.169623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.446 [2024-07-15 12:59:49.169642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.446 qpair failed and we were unable to recover it. 00:29:57.446 [2024-07-15 12:59:49.169779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.446 [2024-07-15 12:59:49.169798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.446 qpair failed and we were unable to recover it. 
00:29:57.446 [2024-07-15 12:59:49.169916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.446 [2024-07-15 12:59:49.169935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.446 qpair failed and we were unable to recover it. 00:29:57.446 [2024-07-15 12:59:49.170110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.446 [2024-07-15 12:59:49.170129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.446 qpair failed and we were unable to recover it. 00:29:57.446 [2024-07-15 12:59:49.170324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.446 [2024-07-15 12:59:49.170343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.446 qpair failed and we were unable to recover it. 00:29:57.446 [2024-07-15 12:59:49.170517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.446 [2024-07-15 12:59:49.170535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.446 qpair failed and we were unable to recover it. 00:29:57.446 [2024-07-15 12:59:49.170838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.446 [2024-07-15 12:59:49.170856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.446 qpair failed and we were unable to recover it. 
00:29:57.446 [2024-07-15 12:59:49.171166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.446 [2024-07-15 12:59:49.171195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.446 qpair failed and we were unable to recover it. 00:29:57.446 [2024-07-15 12:59:49.171528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.446 [2024-07-15 12:59:49.171558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.446 qpair failed and we were unable to recover it. 00:29:57.446 [2024-07-15 12:59:49.171819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.446 [2024-07-15 12:59:49.171849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.446 qpair failed and we were unable to recover it. 00:29:57.446 [2024-07-15 12:59:49.172071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.446 [2024-07-15 12:59:49.172102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.446 qpair failed and we were unable to recover it. 00:29:57.446 [2024-07-15 12:59:49.172424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.446 [2024-07-15 12:59:49.172455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.446 qpair failed and we were unable to recover it. 
00:29:57.447 [2024-07-15 12:59:49.172778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.172796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.173022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.173041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.173177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.173196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.173381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.173404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.173674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.173693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 
00:29:57.447 [2024-07-15 12:59:49.173971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.173989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.174236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.174262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.174440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.174459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.174642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.174672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.174970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.175000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 
00:29:57.447 [2024-07-15 12:59:49.175211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.175241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.175548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.175567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.175705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.175724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.175970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.175988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.176201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.176220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 
00:29:57.447 [2024-07-15 12:59:49.176401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.176420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.176619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.176637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.176834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.176853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.176983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.177002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.177269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.177289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 
00:29:57.447 [2024-07-15 12:59:49.177501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.177520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.177792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.177811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.178112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.178130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.178375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.178395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.178693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.178712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 
00:29:57.447 [2024-07-15 12:59:49.178962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.178980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.179240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.179275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.179521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.447 [2024-07-15 12:59:49.179540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.447 qpair failed and we were unable to recover it. 00:29:57.447 [2024-07-15 12:59:49.179734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.179753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.180005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.180023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 
00:29:57.448 [2024-07-15 12:59:49.180325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.180345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.180639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.180658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.180950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.180968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.181170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.181188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.181437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.181456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 
00:29:57.448 [2024-07-15 12:59:49.181704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.181723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.181962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.181981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.182252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.182279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.182488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.182507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.182767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.182786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 
00:29:57.448 [2024-07-15 12:59:49.183000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.183019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.183281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.183301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.183492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.183510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.183612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.183635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.183817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.183836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 
00:29:57.448 [2024-07-15 12:59:49.184101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.184120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.184313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.184334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.184542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.184560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.184784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.184814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.185098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.185128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 
00:29:57.448 [2024-07-15 12:59:49.185360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.185391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.185710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.185728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.185998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.186017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.186248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.186274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.186475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.186493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 
00:29:57.448 [2024-07-15 12:59:49.186752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.186771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.187100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.187129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.187415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.187447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.187751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.187781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 00:29:57.448 [2024-07-15 12:59:49.187942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.448 [2024-07-15 12:59:49.187972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.448 qpair failed and we were unable to recover it. 
00:29:57.452 [2024-07-15 12:59:49.216413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-07-15 12:59:49.216444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-07-15 12:59:49.216729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-07-15 12:59:49.216747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-07-15 12:59:49.216940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-07-15 12:59:49.216958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-07-15 12:59:49.217133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-07-15 12:59:49.217152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-07-15 12:59:49.217400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-07-15 12:59:49.217419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 
00:29:57.452 [2024-07-15 12:59:49.217737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-07-15 12:59:49.217767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-07-15 12:59:49.218057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-07-15 12:59:49.218087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-07-15 12:59:49.218402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-07-15 12:59:49.218434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-07-15 12:59:49.218723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-07-15 12:59:49.218754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-07-15 12:59:49.219002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-07-15 12:59:49.219032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 
00:29:57.452 [2024-07-15 12:59:49.219362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-07-15 12:59:49.219393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-07-15 12:59:49.219598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-07-15 12:59:49.219629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-07-15 12:59:49.219821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-07-15 12:59:49.219840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-07-15 12:59:49.220028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-07-15 12:59:49.220047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-07-15 12:59:49.220167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-07-15 12:59:49.220186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 
00:29:57.452 [2024-07-15 12:59:49.220442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-07-15 12:59:49.220461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-07-15 12:59:49.220708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-07-15 12:59:49.220727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-07-15 12:59:49.220924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.220942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.221075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.221093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.221364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.221384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 
00:29:57.453 [2024-07-15 12:59:49.221629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.221651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.221828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.221847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.222125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.222143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.222396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.222416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.222639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.222657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 
00:29:57.453 [2024-07-15 12:59:49.222869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.222888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.223132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.223150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.223459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.223478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.223802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.223820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.224113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.224132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 
00:29:57.453 [2024-07-15 12:59:49.224380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.224399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.224666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.224685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.224914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.224932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.225106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.225125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.225410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.225429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 
00:29:57.453 [2024-07-15 12:59:49.225698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.225717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.225934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.225953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.226162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.226181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.226390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.226409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.226625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.226643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 
00:29:57.453 [2024-07-15 12:59:49.226822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.226840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.227026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.227056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.227357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.227388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.227602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.227632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.227931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.227950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 
00:29:57.453 [2024-07-15 12:59:49.228247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.228273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.228576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-07-15 12:59:49.228606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-07-15 12:59:49.228910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.228942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.229228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.229269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.229562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.229581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 
00:29:57.454 [2024-07-15 12:59:49.229872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.229890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.230110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.230128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.230387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.230406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.230593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.230612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.230902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.230920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 
00:29:57.454 [2024-07-15 12:59:49.231199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.231217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.231508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.231527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.231713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.231731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.232003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.232022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.232296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.232315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 
00:29:57.454 [2024-07-15 12:59:49.232580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.232602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.232808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.232827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.233108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.233126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.233387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.233406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.233617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.233635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 
00:29:57.454 [2024-07-15 12:59:49.233907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.233926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.234128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.234147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.234427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.234447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.234712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.234730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.234926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.234945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 
00:29:57.454 [2024-07-15 12:59:49.235225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.235243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.235534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.235553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.235825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.235843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.236013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.236032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.236208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.236227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 
00:29:57.454 [2024-07-15 12:59:49.236460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.236480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-07-15 12:59:49.236755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-07-15 12:59:49.236774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-07-15 12:59:49.236963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-07-15 12:59:49.236982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-07-15 12:59:49.237190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-07-15 12:59:49.237209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-07-15 12:59:49.237481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-07-15 12:59:49.237524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 
00:29:57.459 [2024-07-15 12:59:49.268266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.268298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.268580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.268609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.268822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.268852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.269139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.269169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.269350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.269370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 
00:29:57.459 [2024-07-15 12:59:49.269546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.269581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.269850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.269880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.270078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.270109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.270410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.270442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.270609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.270639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 
00:29:57.459 [2024-07-15 12:59:49.270978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.270997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.271212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.271230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.271493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.271512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.271762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.271780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.272009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.272027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 
00:29:57.459 [2024-07-15 12:59:49.272281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.272301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.272497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.272516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.272643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.272663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.272785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.272805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.273014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.273033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 
00:29:57.459 [2024-07-15 12:59:49.273206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.273225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.273459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.273478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.273755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.273773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.274062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.274081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.274326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.274347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 
00:29:57.459 [2024-07-15 12:59:49.274539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.274559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.274749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.274768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.274984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.275003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.275250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.275277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.275525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.275544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 
00:29:57.459 [2024-07-15 12:59:49.275822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.275843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.276128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.276147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 12:59:49.276372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 12:59:49.276392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.276646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.276664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.276924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.276943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 
00:29:57.460 [2024-07-15 12:59:49.277139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.277157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.277436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.277456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.277673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.277693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.277901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.277919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.278240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.278282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 
00:29:57.460 [2024-07-15 12:59:49.278496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.278526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.278707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.278726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.279009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.279040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.279336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.279369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.279616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.279636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 
00:29:57.460 [2024-07-15 12:59:49.279939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.279958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.280252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.280279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.280399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.280417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.280610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.280640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.280932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.280962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 
00:29:57.460 [2024-07-15 12:59:49.281270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.281302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.281522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.281552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.281770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.281800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.282041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.282071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.282285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.282316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 
00:29:57.460 [2024-07-15 12:59:49.282597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.282616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.282739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.282758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.282980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.283010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.283228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.283272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.283476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.283507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 
00:29:57.460 [2024-07-15 12:59:49.283721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.283739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.283959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.283978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.460 [2024-07-15 12:59:49.284251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.460 [2024-07-15 12:59:49.284279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.460 qpair failed and we were unable to recover it. 00:29:57.461 [2024-07-15 12:59:49.284533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-07-15 12:59:49.284551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-07-15 12:59:49.284744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-07-15 12:59:49.284763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 
00:29:57.461 [2024-07-15 12:59:49.284974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-07-15 12:59:49.284993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-07-15 12:59:49.285295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-07-15 12:59:49.285327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-07-15 12:59:49.285470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-07-15 12:59:49.285501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-07-15 12:59:49.285798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-07-15 12:59:49.285828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-07-15 12:59:49.286045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-07-15 12:59:49.286077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 
00:29:57.461 [2024-07-15 12:59:49.286201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-07-15 12:59:49.286237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-07-15 12:59:49.286425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-07-15 12:59:49.286457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-07-15 12:59:49.286764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-07-15 12:59:49.286795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-07-15 12:59:49.287021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-07-15 12:59:49.287040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 00:29:57.461 [2024-07-15 12:59:49.287319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.461 [2024-07-15 12:59:49.287339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.461 qpair failed and we were unable to recover it. 
00:29:57.461 [2024-07-15 12:59:49.287464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.461 [2024-07-15 12:59:49.287482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.461 qpair failed and we were unable to recover it.
00:29:57.465 [message group above repeated through 2024-07-15 12:59:49.315518: every connect() attempt by tqpair=0x7f75d8000b90 to 10.0.0.2:4420 failed with errno = 111 (ECONNREFUSED) and the qpair could not be recovered]
00:29:57.465 [2024-07-15 12:59:49.315768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 12:59:49.315804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 12:59:49.316015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 12:59:49.316045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 12:59:49.316252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 12:59:49.316296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 12:59:49.316565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 12:59:49.316583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 12:59:49.316787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 12:59:49.316806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 
00:29:57.465 [2024-07-15 12:59:49.317052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 12:59:49.317070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 12:59:49.317243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 12:59:49.317269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 12:59:49.317458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 12:59:49.317477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 12:59:49.317730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 12:59:49.317748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 12:59:49.317952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 12:59:49.317971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 
00:29:57.465 [2024-07-15 12:59:49.318164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 12:59:49.318183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 12:59:49.318372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 12:59:49.318392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 12:59:49.318629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 12:59:49.318648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 12:59:49.318856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 12:59:49.318886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 12:59:49.319181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 12:59:49.319211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 
00:29:57.465 [2024-07-15 12:59:49.319447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 12:59:49.319482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 12:59:49.319782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 12:59:49.319812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 12:59:49.320020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 12:59:49.320039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 12:59:49.320250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 12:59:49.320276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 12:59:49.320488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 12:59:49.320507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 
00:29:57.465 [2024-07-15 12:59:49.320680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 12:59:49.320716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 12:59:49.321007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.321036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.321343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.321374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.321644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.321674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.322001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.322030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 
00:29:57.466 [2024-07-15 12:59:49.322290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.322321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.322454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.322485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.322791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.322822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.323049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.323068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.323337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.323357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 
00:29:57.466 [2024-07-15 12:59:49.323600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.323619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.323754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.323773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.323989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.324008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.324269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.324289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.324507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.324526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 
00:29:57.466 [2024-07-15 12:59:49.324706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.324724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.324932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.324962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.325201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.325230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.325383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.325415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.325627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.325656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 
00:29:57.466 [2024-07-15 12:59:49.325903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.325938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.326148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.326178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.326479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.326509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.326805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.326836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.327168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.327200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 
00:29:57.466 [2024-07-15 12:59:49.327558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.327590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.327882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.327901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.328197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.328215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.328461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.328481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.328760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.328779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 
00:29:57.466 [2024-07-15 12:59:49.329021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.329039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.329281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.329300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.329564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.329583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.329822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.329840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.330125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.330144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 
00:29:57.466 [2024-07-15 12:59:49.330404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.330424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.330599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.330618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.330805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.330824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.331082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.331112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.331350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.331381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 
00:29:57.466 [2024-07-15 12:59:49.331673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.331692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.331897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 12:59:49.331915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 12:59:49.332189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.332208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.332474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.332493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.332754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.332773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 
00:29:57.467 [2024-07-15 12:59:49.333035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.333054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.333227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.333245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.333505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.333525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.333876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.333895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.334180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.334210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 
00:29:57.467 [2024-07-15 12:59:49.334528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.334560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.334777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.334806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.335141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.335171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.335315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.335346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.335619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.335649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 
00:29:57.467 [2024-07-15 12:59:49.335869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.335888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.335988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.336006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.336225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.336243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.336502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.336522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.336724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.336742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 
00:29:57.467 [2024-07-15 12:59:49.337049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.337084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.337356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.337387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.337610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.337628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.337873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.337892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.338010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.338028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 
00:29:57.467 [2024-07-15 12:59:49.338307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.338326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.338530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.338549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.338744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.338763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.339013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.339044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.339347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.339379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 
00:29:57.467 [2024-07-15 12:59:49.339618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.339648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.339981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.340000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.340202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.340220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.340517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.340536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.340716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.340734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 
00:29:57.467 [2024-07-15 12:59:49.341000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.341017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.341137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.341155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.341430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.341450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.341728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.341757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.342049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.342079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 
00:29:57.467 [2024-07-15 12:59:49.342407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.342438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.342671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.342700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 12:59:49.342954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 12:59:49.342985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 12:59:49.343268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.343300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 12:59:49.343570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.343600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 
00:29:57.468 [2024-07-15 12:59:49.343827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.343845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 12:59:49.344068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.344088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 12:59:49.344363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.344383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 4116767 Killed "${NVMF_APP[@]}" "$@" 00:29:57.468 [2024-07-15 12:59:49.344603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.344623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 12:59:49.344793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.344811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 
00:29:57.468 [2024-07-15 12:59:49.345056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.345074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 12:59:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:57.468 [2024-07-15 12:59:49.345281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.345301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 12:59:49.345527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.345546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 12:59:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:57.468 [2024-07-15 12:59:49.345687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.345705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 
00:29:57.468 12:59:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:57.468 [2024-07-15 12:59:49.345878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.345896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 12:59:49.346083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.346102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 12:59:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:57.468 [2024-07-15 12:59:49.346353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.346374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 12:59:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:57.468 [2024-07-15 12:59:49.346641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.346660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 
00:29:57.468 [2024-07-15 12:59:49.346987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.347006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 12:59:49.347292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.347312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 12:59:49.347512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.347531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 12:59:49.347786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.347805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 12:59:49.347991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.348010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 
00:29:57.468 [2024-07-15 12:59:49.348188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.348207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 12:59:49.348400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.348419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 12:59:49.348537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.348555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 12:59:49.348752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.348771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 12:59:49.349006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.349025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 
00:29:57.468 [2024-07-15 12:59:49.349277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.349296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 12:59:49.349495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.349514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 12:59:49.349689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.349707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 12:59:49.349851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.349873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 12:59:49.350122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.350141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 
00:29:57.468 [2024-07-15 12:59:49.350396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.350415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 12:59:49.350541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 12:59:49.350559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 12:59:49.350809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 12:59:49.350828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 12:59:49.351045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 12:59:49.351064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 12:59:49.351236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 12:59:49.351261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 
00:29:57.469 [2024-07-15 12:59:49.351380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 12:59:49.351398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 12:59:49.351609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 12:59:49.351628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 12:59:49.351943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 12:59:49.351961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 12:59:49.352211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 12:59:49.352232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 12:59:49.352444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 12:59:49.352465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 
00:29:57.752 [2024-07-15 12:59:49.352580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 12:59:49.352598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 12:59:49.352877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 12:59:49.352895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 12:59:49.353147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.353165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 12:59:49.353379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.353400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 12:59:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=4117632 00:29:57.753 [2024-07-15 12:59:49.353582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.353600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 
00:29:57.753 12:59:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4117632 00:29:57.753 [2024-07-15 12:59:49.353821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.353841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 12:59:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:57.753 [2024-07-15 12:59:49.354077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.354096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 12:59:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 4117632 ']' 00:29:57.753 [2024-07-15 12:59:49.354269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.354289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 12:59:49.354468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.354487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 
00:29:57.753 12:59:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.753 [2024-07-15 12:59:49.354684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.354703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 12:59:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:57.753 [2024-07-15 12:59:49.354982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.355001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 12:59:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.753 [2024-07-15 12:59:49.355189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.355212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 
00:29:57.753 12:59:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:57.753 [2024-07-15 12:59:49.355400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.355419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 12:59:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:57.753 [2024-07-15 12:59:49.355605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.355625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 12:59:49.355872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.355891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 12:59:49.356109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.356127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 12:59:49.356260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.356280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 
00:29:57.753 [2024-07-15 12:59:49.356532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.356550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 12:59:49.356763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.356782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 12:59:49.357073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.357093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 12:59:49.357342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.357362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 12:59:49.357556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.357575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 
00:29:57.753 [2024-07-15 12:59:49.357852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.357872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 12:59:49.358188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.358206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 12:59:49.358519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.358539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 12:59:49.358710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.358730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 12:59:49.358922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 12:59:49.358943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 
00:29:57.756 [2024-07-15 12:59:49.385925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.385944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 12:59:49.386078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.386097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 12:59:49.386353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.386373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 12:59:49.386559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.386578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 12:59:49.386711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.386730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 
00:29:57.756 [2024-07-15 12:59:49.386919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.386937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 12:59:49.387111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.387130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 12:59:49.387381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.387401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 12:59:49.387577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.387596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 12:59:49.387785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.387804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 
00:29:57.756 [2024-07-15 12:59:49.387908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.387928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 12:59:49.388209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.388227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 12:59:49.388357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.388377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 12:59:49.388589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.388607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 12:59:49.388881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.388900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 
00:29:57.756 [2024-07-15 12:59:49.389121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.389141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 12:59:49.389328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.389348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 12:59:49.389530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.389549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 12:59:49.389733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.389751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 12:59:49.389972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.389990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 
00:29:57.756 [2024-07-15 12:59:49.390190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.390212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 12:59:49.390484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 12:59:49.390504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.390606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.390625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.390756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.390775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.390900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.390919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 
00:29:57.757 [2024-07-15 12:59:49.391050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.391069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.391280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.391299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.391482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.391501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.391689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.391708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.391882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.391901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 
00:29:57.757 [2024-07-15 12:59:49.392071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.392090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.392408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.392428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.392556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.392577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.392852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.392870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.393115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.393134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 
00:29:57.757 [2024-07-15 12:59:49.393236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.393253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.393518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.393537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.393737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.393755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.393970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.393989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.394097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.394117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 
00:29:57.757 [2024-07-15 12:59:49.394364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.394385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.394499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.394518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.394691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.394712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.394923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.394942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.395238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.395265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 
00:29:57.757 [2024-07-15 12:59:49.395461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.395479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.395686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.395705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.395899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.395918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.396093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.396112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.396247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.396273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 
00:29:57.757 [2024-07-15 12:59:49.396464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.396483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.396606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.396625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.396796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 12:59:49.396815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 12:59:49.396990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.397010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.397264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.397284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 
00:29:57.758 [2024-07-15 12:59:49.397578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.397596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.397784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.397802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.398093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.398111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.398303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.398322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.398457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.398475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 
00:29:57.758 [2024-07-15 12:59:49.398600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.398622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.398789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.398808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.399029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.399047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.399266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.399285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.399509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.399527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 
00:29:57.758 [2024-07-15 12:59:49.399658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.399676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.399865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.399884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.400058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.400076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.400212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.400231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.400334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.400353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 
00:29:57.758 [2024-07-15 12:59:49.400527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.400545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.400808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.400826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.401025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.401043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.401164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.401182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.401448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.401467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 
00:29:57.758 [2024-07-15 12:59:49.401673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.401692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.401815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.401833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.402010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.402029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.402130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.402149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.402239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.402276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 
00:29:57.758 [2024-07-15 12:59:49.402518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.402538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.402640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.402660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.402861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.402879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.402998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.403017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 12:59:49.403109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 12:59:49.403126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 
00:29:57.758 [2024-07-15 12:59:49.403233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.758 [2024-07-15 12:59:49.403251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.758 qpair failed and we were unable to recover it.
00:29:57.758 [2024-07-15 12:59:49.403438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.758 [2024-07-15 12:59:49.403457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.758 qpair failed and we were unable to recover it.
00:29:57.758 [2024-07-15 12:59:49.403577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.758 [2024-07-15 12:59:49.403596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.758 qpair failed and we were unable to recover it.
00:29:57.758 [2024-07-15 12:59:49.403698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.758 [2024-07-15 12:59:49.403716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.758 qpair failed and we were unable to recover it.
00:29:57.758 [2024-07-15 12:59:49.403991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.758 [2024-07-15 12:59:49.404009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.758 qpair failed and we were unable to recover it.
00:29:57.758 [2024-07-15 12:59:49.404181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.758 [2024-07-15 12:59:49.404199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.758 qpair failed and we were unable to recover it.
00:29:57.758 [2024-07-15 12:59:49.404461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.758 [2024-07-15 12:59:49.404481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.758 qpair failed and we were unable to recover it.
00:29:57.758 [2024-07-15 12:59:49.404648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.758 [2024-07-15 12:59:49.404666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.758 qpair failed and we were unable to recover it.
00:29:57.758 [2024-07-15 12:59:49.404834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.758 [2024-07-15 12:59:49.404852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.758 qpair failed and we were unable to recover it.
00:29:57.758 [2024-07-15 12:59:49.405034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.405052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.405295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.405314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.405507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.405526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.405769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.405787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.406006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.406024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.406264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.406283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.406498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.406521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.406656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.406674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.406850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.406868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.407041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.407059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.407154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.407172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.407385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.407404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.407525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.407543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.407579] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization...
00:29:57.759 [2024-07-15 12:59:49.407633] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:57.759 [2024-07-15 12:59:49.407647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.407665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.407847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.407864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.408030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.408047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.408318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.408337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.408507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.408524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.408628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.408645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.408777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.408794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.408910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.408927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.409170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.409188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.409304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.409322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.409563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.409581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.409769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.409788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.410048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.410067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.410270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.410289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.410556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.410574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.410672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.410689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.410801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.410820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.411111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.411129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.411246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.411273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.411507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.411526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.411727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.411746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.411938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.411957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.412198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.412216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.412409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.412428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.412561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.412580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.412747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.412766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 12:59:49.412984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 12:59:49.413002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.413179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.413197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.413328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.413347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.413546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.413564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.413808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.413826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.414117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.414135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.414248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.414281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.414414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.414433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.414620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.414638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.414953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.414971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.415087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.415105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.415351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.415370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.415623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.415641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.415821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.415838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.416025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.416043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.416176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.416194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.416394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.416413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.416520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.416538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.416707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.416724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.416891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.416909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.417110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.417128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.417296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.417315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.417495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.417513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.417626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.417644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.417897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.417915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.418094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.418112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.418295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.418314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.418586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.418604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.418815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.418834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.419016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.419033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.419215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.419233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.419429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.419448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.419742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.419760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.419976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.419994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.420249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.420276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.420493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.420511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.420704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.420722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.420833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.420851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.421047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.421066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.421330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.760 [2024-07-15 12:59:49.421349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.760 qpair failed and we were unable to recover it.
00:29:57.760 [2024-07-15 12:59:49.421532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.761 [2024-07-15 12:59:49.421550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.761 qpair failed and we were unable to recover it.
00:29:57.761 [2024-07-15 12:59:49.421791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.761 [2024-07-15 12:59:49.421808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.761 qpair failed and we were unable to recover it.
00:29:57.761 [2024-07-15 12:59:49.422000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.761 [2024-07-15 12:59:49.422018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.761 qpair failed and we were unable to recover it.
00:29:57.761 [2024-07-15 12:59:49.422203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.761 [2024-07-15 12:59:49.422221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.761 qpair failed and we were unable to recover it.
00:29:57.761 [2024-07-15 12:59:49.422420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.761 [2024-07-15 12:59:49.422438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.761 qpair failed and we were unable to recover it.
00:29:57.761 [2024-07-15 12:59:49.422564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.761 [2024-07-15 12:59:49.422582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.761 qpair failed and we were unable to recover it.
00:29:57.761 [2024-07-15 12:59:49.422705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.422726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.422886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.422904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.423026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.423044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.423164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.423182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.423419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.423438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 
00:29:57.761 [2024-07-15 12:59:49.423609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.423627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.423797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.423815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.423911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.423929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.424125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.424143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.424309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.424328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 
00:29:57.761 [2024-07-15 12:59:49.424583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.424601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.424838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.424856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.424976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.424994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.425177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.425195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.425321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.425341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 
00:29:57.761 [2024-07-15 12:59:49.425507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.425525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.425707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.425725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.425895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.425913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.426104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.426122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.426302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.426321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 
00:29:57.761 [2024-07-15 12:59:49.426532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.426550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.426811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.426829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.426997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.427015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.427142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.427160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.427381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.427400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 
00:29:57.761 [2024-07-15 12:59:49.427515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.427533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.427705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.427724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.427965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.761 [2024-07-15 12:59:49.427983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.761 qpair failed and we were unable to recover it. 00:29:57.761 [2024-07-15 12:59:49.428150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.428169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.428296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.428314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 
00:29:57.762 [2024-07-15 12:59:49.428565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.428583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.428767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.428786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.428964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.428983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.429182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.429200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.429310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.429329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 
00:29:57.762 [2024-07-15 12:59:49.429528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.429546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.429733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.429752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.429964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.429982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.430117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.430135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.430301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.430320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 
00:29:57.762 [2024-07-15 12:59:49.430414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.430434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.430616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.430633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.430849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.430868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.431028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.431046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.431144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.431161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 
00:29:57.762 [2024-07-15 12:59:49.431275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.431293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.431428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.431445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.431555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.431573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.431803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.431822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.431946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.431964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 
00:29:57.762 [2024-07-15 12:59:49.432131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.432149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.432274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.432292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.432552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.432571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.432738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.432756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.432937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.432956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 
00:29:57.762 [2024-07-15 12:59:49.433121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.433139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.433327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.433347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.433466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.433484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.433592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.433610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.433789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.433806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 
00:29:57.762 [2024-07-15 12:59:49.434002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.434020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.434207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.434224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.434352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.434370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.434486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.434504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 00:29:57.762 [2024-07-15 12:59:49.434633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.434652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.762 qpair failed and we were unable to recover it. 
00:29:57.762 [2024-07-15 12:59:49.434915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.762 [2024-07-15 12:59:49.434933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.435168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.435185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.435384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.435404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.435538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.435556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.435736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.435754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 
00:29:57.763 [2024-07-15 12:59:49.435919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.435937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.436042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.436060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.436243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.436268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.436443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.436461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.436714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.436733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 
00:29:57.763 [2024-07-15 12:59:49.436859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.436878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.436979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.436996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.437205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.437224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.437409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.437428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.437602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.437620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 
00:29:57.763 [2024-07-15 12:59:49.437789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.437810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.438022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.438040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.438212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.438231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.438369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.438388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.438575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.438592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 
00:29:57.763 [2024-07-15 12:59:49.438781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.438799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.438930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.438948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.439062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.439081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.439266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.439284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.439555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.439574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 
00:29:57.763 [2024-07-15 12:59:49.439757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.439774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.439932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.439967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.440148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.440166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.440337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.440356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.440473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.440491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 
00:29:57.763 [2024-07-15 12:59:49.440758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.440776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.441033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.441051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.441292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.441311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.441421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.441439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.441643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.441662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 
00:29:57.763 [2024-07-15 12:59:49.441868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.441887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.442051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.442069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.442234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.442252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.442512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.442530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.763 [2024-07-15 12:59:49.442709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.442727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 
00:29:57.763 [2024-07-15 12:59:49.442889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.763 [2024-07-15 12:59:49.442908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.763 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.443121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.443139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.443349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.443368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.443551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.443569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.443805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.443822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 
00:29:57.764 [2024-07-15 12:59:49.443998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.444016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.444264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.444283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.444470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.444489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.444691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.444709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.444991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.445009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 
00:29:57.764 [2024-07-15 12:59:49.445180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.445198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.445325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.445344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.445459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.445477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.445587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.445605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.445767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.445785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 
00:29:57.764 [2024-07-15 12:59:49.445931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.445952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.764 [2024-07-15 12:59:49.446118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.446136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.446320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.446339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.446535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.446553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.446752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.446770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 
00:29:57.764 [2024-07-15 12:59:49.446998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.447016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.447270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.447289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.447524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.447542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.447648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.447666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.447767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.447787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 
00:29:57.764 [2024-07-15 12:59:49.447910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.447928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.448105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.448122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.448216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.448233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.448478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.448500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.448594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.448611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 
00:29:57.764 [2024-07-15 12:59:49.448782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.448800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.448904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.448924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.449107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.449124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.449369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.449390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.449556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.449573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 
00:29:57.764 [2024-07-15 12:59:49.449689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.764 [2024-07-15 12:59:49.449706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.764 qpair failed and we were unable to recover it. 00:29:57.764 [2024-07-15 12:59:49.449885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.449904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.450025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.450043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.450221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.450239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.450425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.450495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4ed70 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 
00:29:57.765 [2024-07-15 12:59:49.450757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.450827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d0000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.451021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.451055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d0000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.451325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.451346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.451536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.451555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.451724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.451742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 
00:29:57.765 [2024-07-15 12:59:49.451852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.451870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.452033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.452051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.452248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.452272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.452452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.452469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.452730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.452748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 
00:29:57.765 [2024-07-15 12:59:49.453041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.453059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.453174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.453192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.453450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.453468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.453669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.453687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.453814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.453832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 
00:29:57.765 [2024-07-15 12:59:49.454067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.454089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.454272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.454290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.454486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.454504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.454600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.454617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.454913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.454931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 
00:29:57.765 [2024-07-15 12:59:49.455037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.455055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.455223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.455241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.455459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.455478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.455606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.455624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.455735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.455754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 
00:29:57.765 [2024-07-15 12:59:49.455854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.455872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.456080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.456098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.456207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.456225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.456429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.456447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.456638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.456657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 
00:29:57.765 [2024-07-15 12:59:49.456937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.456956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.457087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.457105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.765 [2024-07-15 12:59:49.457269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.765 [2024-07-15 12:59:49.457288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.765 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 12:59:49.457524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 12:59:49.457543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 12:59:49.457725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 12:59:49.457743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 
00:29:57.766 [2024-07-15 12:59:49.457844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 12:59:49.457862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 12:59:49.457973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 12:59:49.457991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 12:59:49.458172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 12:59:49.458191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 12:59:49.458352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 12:59:49.458371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 12:59:49.458468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 12:59:49.458485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 
00:29:57.769 [2024-07-15 12:59:49.481403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.481421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.481605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.481623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.481879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.481897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.482023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.482041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.482209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.482227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 
00:29:57.769 [2024-07-15 12:59:49.482403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.482422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.482595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.482613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.482815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.482834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.482991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.483009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.483170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.483189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 
00:29:57.769 [2024-07-15 12:59:49.483369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.483388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.483555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.483574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.483742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.483760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.483947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.483965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.484197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.484215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 
00:29:57.769 [2024-07-15 12:59:49.484332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.484351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.484557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.484575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.484756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.484773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.484870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.484889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.484990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.485007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 
00:29:57.769 [2024-07-15 12:59:49.485211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.485229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.485399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.485418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.485690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.485708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.485880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.485898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.486119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.486136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 
00:29:57.769 [2024-07-15 12:59:49.486298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.486320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.486503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.486521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.486727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.486745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.486871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.486889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.487141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.487160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 
00:29:57.769 [2024-07-15 12:59:49.487333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.487352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.487456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.487473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.487703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.487721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 12:59:49.487885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 12:59:49.487903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.488158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.488176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 
00:29:57.770 [2024-07-15 12:59:49.488359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.488378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.488639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.488657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.488831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.488849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.488973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.488991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.489286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.489305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 
00:29:57.770 [2024-07-15 12:59:49.489496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.489515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.489694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.489712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.489894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.489913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.490014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.490031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.490294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.490312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 
00:29:57.770 [2024-07-15 12:59:49.490492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.490511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.490686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.490704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.490989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.491007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.491126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.491144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.491421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.491440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 
00:29:57.770 [2024-07-15 12:59:49.491640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.491659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.491913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.491931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.492032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.492048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.492245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.492271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.492403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.492421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 
00:29:57.770 [2024-07-15 12:59:49.492604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.492622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.492729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.492747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.492850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.492869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.493032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.493050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.493288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.493307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 
00:29:57.770 [2024-07-15 12:59:49.493539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.493557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.493745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.493763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.493921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.493938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.494129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.494147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.494277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.494296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 
00:29:57.770 [2024-07-15 12:59:49.494413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.494435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.494692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.494710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.494919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.770 [2024-07-15 12:59:49.494936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.770 qpair failed and we were unable to recover it. 00:29:57.770 [2024-07-15 12:59:49.495165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.771 [2024-07-15 12:59:49.495183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.771 qpair failed and we were unable to recover it. 00:29:57.771 [2024-07-15 12:59:49.495469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.771 [2024-07-15 12:59:49.495488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.771 qpair failed and we were unable to recover it. 
00:29:57.771 [2024-07-15 12:59:49.495706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.771 [2024-07-15 12:59:49.495725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.771 qpair failed and we were unable to recover it. 00:29:57.771 [2024-07-15 12:59:49.495996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.771 [2024-07-15 12:59:49.496014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.771 qpair failed and we were unable to recover it. 00:29:57.771 [2024-07-15 12:59:49.496194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.771 [2024-07-15 12:59:49.496211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.771 qpair failed and we were unable to recover it. 00:29:57.771 [2024-07-15 12:59:49.496371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.771 [2024-07-15 12:59:49.496389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.771 qpair failed and we were unable to recover it. 00:29:57.771 [2024-07-15 12:59:49.496565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.771 [2024-07-15 12:59:49.496583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.771 qpair failed and we were unable to recover it. 
00:29:57.771 [2024-07-15 12:59:49.496756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.771 [2024-07-15 12:59:49.496774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.771 qpair failed and we were unable to recover it. 
[log elided: the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x7f75d8000b90 (addr=10.0.0.2, port=4420) repeats continuously from 12:59:49.496866 through 12:59:49.519475]
00:29:57.774 [2024-07-15 12:59:49.519641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.519659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.519781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.519799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.519960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.519978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.520144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.520162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.520413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.520432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 
00:29:57.774 [2024-07-15 12:59:49.520619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.520638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.520810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.520828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.520993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.521012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.521129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.521147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.521348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.521367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 
00:29:57.774 [2024-07-15 12:59:49.521469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.521487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.521719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.521738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.521999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.522017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.522140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.522158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.522279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.522298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 
00:29:57.774 [2024-07-15 12:59:49.522541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.522560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.522662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.522679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.522886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.522904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.523078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.523096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.523196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.523214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 
00:29:57.774 [2024-07-15 12:59:49.523401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.523421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.523592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.523610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.523842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.523861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.523955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.523973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.524150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.524167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 
00:29:57.774 [2024-07-15 12:59:49.524356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.524375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.524486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.524503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.524682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.524700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.524861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.524879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 00:29:57.774 [2024-07-15 12:59:49.524993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.525011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.774 qpair failed and we were unable to recover it. 
00:29:57.774 [2024-07-15 12:59:49.525171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.774 [2024-07-15 12:59:49.525190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.525376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.525395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.525574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.525593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.525774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.525793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.525955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.525973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 
00:29:57.775 [2024-07-15 12:59:49.526180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.526202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.526363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.526382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.526512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:57.775 [2024-07-15 12:59:49.526563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.526581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.526758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.526776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.526892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.526909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 
00:29:57.775 [2024-07-15 12:59:49.527116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.527134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.527306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.527325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.527585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.527603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.527771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.527790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.527952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.527970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 
00:29:57.775 [2024-07-15 12:59:49.528094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.528112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.528217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.528235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.528403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.528422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.528547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.528570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.528745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.528764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 
00:29:57.775 [2024-07-15 12:59:49.529021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.529039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.529129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.529147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.529323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.529342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.529516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.529535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.529637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.529656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 
00:29:57.775 [2024-07-15 12:59:49.529826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.529844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.530018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.530036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.530235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.775 [2024-07-15 12:59:49.530253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.775 qpair failed and we were unable to recover it. 00:29:57.775 [2024-07-15 12:59:49.530541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.776 [2024-07-15 12:59:49.530559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.776 qpair failed and we were unable to recover it. 00:29:57.776 [2024-07-15 12:59:49.530766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.776 [2024-07-15 12:59:49.530784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.776 qpair failed and we were unable to recover it. 
00:29:57.776 [2024-07-15 12:59:49.530889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.776 [2024-07-15 12:59:49.530907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.776 qpair failed and we were unable to recover it. 00:29:57.776 [2024-07-15 12:59:49.531009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.776 [2024-07-15 12:59:49.531027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.776 qpair failed and we were unable to recover it. 00:29:57.776 [2024-07-15 12:59:49.531216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.776 [2024-07-15 12:59:49.531234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.776 qpair failed and we were unable to recover it. 00:29:57.776 [2024-07-15 12:59:49.531473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.776 [2024-07-15 12:59:49.531492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.776 qpair failed and we were unable to recover it. 00:29:57.776 [2024-07-15 12:59:49.531597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.776 [2024-07-15 12:59:49.531615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.776 qpair failed and we were unable to recover it. 
00:29:57.776 [2024-07-15 12:59:49.531786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.776 [2024-07-15 12:59:49.531805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.776 qpair failed and we were unable to recover it. 00:29:57.776 [2024-07-15 12:59:49.531979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.776 [2024-07-15 12:59:49.531998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.776 qpair failed and we were unable to recover it. 00:29:57.776 [2024-07-15 12:59:49.532179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.776 [2024-07-15 12:59:49.532196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.776 qpair failed and we were unable to recover it. 00:29:57.776 [2024-07-15 12:59:49.532412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.776 [2024-07-15 12:59:49.532431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.776 qpair failed and we were unable to recover it. 00:29:57.776 [2024-07-15 12:59:49.532596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.776 [2024-07-15 12:59:49.532614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.776 qpair failed and we were unable to recover it. 
00:29:57.776 [2024-07-15 12:59:49.532854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.776 [2024-07-15 12:59:49.532873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.776 qpair failed and we were unable to recover it. 00:29:57.776 [2024-07-15 12:59:49.533101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.776 [2024-07-15 12:59:49.533119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.776 qpair failed and we were unable to recover it. 00:29:57.776 [2024-07-15 12:59:49.533281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.776 [2024-07-15 12:59:49.533299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.776 qpair failed and we were unable to recover it. 00:29:57.776 [2024-07-15 12:59:49.533407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.776 [2024-07-15 12:59:49.533426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.776 qpair failed and we were unable to recover it. 00:29:57.776 [2024-07-15 12:59:49.533538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.776 [2024-07-15 12:59:49.533556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.776 qpair failed and we were unable to recover it. 
00:29:57.776 [2024-07-15 12:59:49.533652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.776 [2024-07-15 12:59:49.533670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.776 qpair failed and we were unable to recover it. 
[identical connect() failed (errno = 111) / qpair recovery failures for tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 repeat continuously; intermediate occurrences from 12:59:49.533 through 12:59:49.557 elided]
00:29:57.779 [2024-07-15 12:59:49.557284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.779 [2024-07-15 12:59:49.557302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.779 qpair failed and we were unable to recover it. 
00:29:57.779 [2024-07-15 12:59:49.557420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.779 [2024-07-15 12:59:49.557438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.779 qpair failed and we were unable to recover it. 00:29:57.779 [2024-07-15 12:59:49.557614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.779 [2024-07-15 12:59:49.557632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.779 qpair failed and we were unable to recover it. 00:29:57.779 [2024-07-15 12:59:49.557744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.779 [2024-07-15 12:59:49.557762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.779 qpair failed and we were unable to recover it. 00:29:57.779 [2024-07-15 12:59:49.557993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.779 [2024-07-15 12:59:49.558012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.779 qpair failed and we were unable to recover it. 00:29:57.779 [2024-07-15 12:59:49.558207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.779 [2024-07-15 12:59:49.558226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.779 qpair failed and we were unable to recover it. 
00:29:57.779 [2024-07-15 12:59:49.558425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.779 [2024-07-15 12:59:49.558447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.779 qpair failed and we were unable to recover it. 00:29:57.779 [2024-07-15 12:59:49.558562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.779 [2024-07-15 12:59:49.558581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.779 qpair failed and we were unable to recover it. 00:29:57.779 [2024-07-15 12:59:49.558841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.779 [2024-07-15 12:59:49.558859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.779 qpair failed and we were unable to recover it. 00:29:57.779 [2024-07-15 12:59:49.559028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.779 [2024-07-15 12:59:49.559046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.779 qpair failed and we were unable to recover it. 00:29:57.779 [2024-07-15 12:59:49.559166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.779 [2024-07-15 12:59:49.559185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.779 qpair failed and we were unable to recover it. 
00:29:57.779 [2024-07-15 12:59:49.559426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.779 [2024-07-15 12:59:49.559445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.779 qpair failed and we were unable to recover it. 00:29:57.779 [2024-07-15 12:59:49.559686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.779 [2024-07-15 12:59:49.559704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.779 qpair failed and we were unable to recover it. 00:29:57.779 [2024-07-15 12:59:49.559823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.779 [2024-07-15 12:59:49.559841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.779 qpair failed and we were unable to recover it. 00:29:57.779 [2024-07-15 12:59:49.560019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.779 [2024-07-15 12:59:49.560037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.560271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.560290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 
00:29:57.780 [2024-07-15 12:59:49.560469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.560487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.560743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.560761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.560866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.560884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.561070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.561088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.561354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.561373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 
00:29:57.780 [2024-07-15 12:59:49.561563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.561581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.561747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.561766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.561949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.561967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.562146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.562164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.562275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.562294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 
00:29:57.780 [2024-07-15 12:59:49.562484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.562503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.562622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.562640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.562870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.562888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.563058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.563076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.563305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.563324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 
00:29:57.780 [2024-07-15 12:59:49.563503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.563521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.563704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.563722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.563963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.563981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.564098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.564116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.564290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.564309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 
00:29:57.780 [2024-07-15 12:59:49.564517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.564536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.564720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.564738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.564932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.564950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.565062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.565080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.565264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.565282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 
00:29:57.780 [2024-07-15 12:59:49.565487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.565506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.780 [2024-07-15 12:59:49.565711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.780 [2024-07-15 12:59:49.565729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.780 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.565903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.565921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.566118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.566136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.566321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.566340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 
00:29:57.781 [2024-07-15 12:59:49.566433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.566455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.566616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.566634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.566745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.566764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.566967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.566985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.567152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.567170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 
00:29:57.781 [2024-07-15 12:59:49.567332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.567351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.567572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.567590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.567681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.567699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.567956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.567974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.568098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.568117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 
00:29:57.781 [2024-07-15 12:59:49.568347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.568368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.568532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.568550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.568734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.568753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.568981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.569000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.569129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.569148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 
00:29:57.781 [2024-07-15 12:59:49.569268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.569287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.569455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.569474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.569735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.569753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.569866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.569884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.570141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.570159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 
00:29:57.781 [2024-07-15 12:59:49.570282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.570301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.570476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.570494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.570685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.570703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.570946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.570965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.571236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.571260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 
00:29:57.781 [2024-07-15 12:59:49.571500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.571519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.571680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.571698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.571895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.571912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.572111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.572129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 00:29:57.781 [2024-07-15 12:59:49.572303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.781 [2024-07-15 12:59:49.572322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.781 qpair failed and we were unable to recover it. 
00:29:57.781 [2024-07-15 12:59:49.572484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.781 [2024-07-15 12:59:49.572503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.781 qpair failed and we were unable to recover it.
00:29:57.781 [2024-07-15 12:59:49.572686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.781 [2024-07-15 12:59:49.572704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.781 qpair failed and we were unable to recover it.
00:29:57.781 [2024-07-15 12:59:49.572831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.781 [2024-07-15 12:59:49.572849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.781 qpair failed and we were unable to recover it.
00:29:57.781 [2024-07-15 12:59:49.573105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.781 [2024-07-15 12:59:49.573123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.781 qpair failed and we were unable to recover it.
00:29:57.781 [2024-07-15 12:59:49.573296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.781 [2024-07-15 12:59:49.573316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.781 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.573519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.573538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.573769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.573788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.573895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.573913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.574089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.574107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.574205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.574223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.574423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.574445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.574715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.574734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.574901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.574919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.575152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.575171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.575269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.575288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.575518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.575536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.575641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.575659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.575920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.575938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.576185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.576204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.576314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.576333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.576455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.576473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.576632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.576651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.576808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.576825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.577008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.577027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.577296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.577315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.577440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.577458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.577718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.577736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.577947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.577966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.578059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.578077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.578183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.578201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.578432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.578451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.578561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.578579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.578679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.578696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.578880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.578898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.579155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.579174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.579282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.579301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.579405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.579424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.579631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.579650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.579827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.579845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.580080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.580098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.580204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.580222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.580411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.580430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.782 [2024-07-15 12:59:49.580590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.782 [2024-07-15 12:59:49.580609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.782 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.580786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.580805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.580909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.580926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.581122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.581139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.581370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.581389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.581506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.581525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.581785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.581803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.582009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.582027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.582154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.582174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.582391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.582410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.582638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.582656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.582914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.582932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.583137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.583156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.583333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.583352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.583473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.583491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.583651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.583669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.583833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.583852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.584040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.584057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.584170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.584188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.584370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.584389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.584588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.584607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.584791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.584810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.584915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.584934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.585112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.585131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.585334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.585354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.585454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.585472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.585599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.585616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.585808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.585826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.585925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.585943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.586197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.586215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.586378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.586397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.586496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.586514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.586711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.586729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.586888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.586906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.783 [2024-07-15 12:59:49.587140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.783 [2024-07-15 12:59:49.587158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.783 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.587344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.587362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.587545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.587563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.587732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.587749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.587928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.587945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.588201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.588219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.588397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.588416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.588603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.588621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.588712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.588729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.588961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.588979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.589151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.589170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.589284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.589303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.589562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.589580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.589829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.589847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.589974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.589995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.590158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.590176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.590341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.590359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.590616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.590634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.590867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.590885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.591043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.591061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.591219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.591237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.591355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.591373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.591542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.591559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.591758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.591776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.591881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.591898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.592153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.592170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.592263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.592281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.592440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.592458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.592702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.592720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.592912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.592929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.593091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.593109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.593288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.784 [2024-07-15 12:59:49.593307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.784 qpair failed and we were unable to recover it.
00:29:57.784 [2024-07-15 12:59:49.593493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.785 [2024-07-15 12:59:49.593511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.785 qpair failed and we were unable to recover it.
00:29:57.785 [2024-07-15 12:59:49.593740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.785 [2024-07-15 12:59:49.593758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.785 qpair failed and we were unable to recover it.
00:29:57.785 [2024-07-15 12:59:49.593921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.785 [2024-07-15 12:59:49.593939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.785 qpair failed and we were unable to recover it.
00:29:57.785 [2024-07-15 12:59:49.594121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.785 [2024-07-15 12:59:49.594139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.785 qpair failed and we were unable to recover it.
00:29:57.785 [2024-07-15 12:59:49.594241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.785 [2024-07-15 12:59:49.594267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.785 qpair failed and we were unable to recover it.
00:29:57.785 [2024-07-15 12:59:49.594449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.785 [2024-07-15 12:59:49.594468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.785 qpair failed and we were unable to recover it.
00:29:57.785 [2024-07-15 12:59:49.594700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.785 [2024-07-15 12:59:49.594717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.785 qpair failed and we were unable to recover it.
00:29:57.785 [2024-07-15 12:59:49.594902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.594920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 00:29:57.785 [2024-07-15 12:59:49.595081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.595098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 00:29:57.785 [2024-07-15 12:59:49.595279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.595299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 00:29:57.785 [2024-07-15 12:59:49.595407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.595427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 00:29:57.785 [2024-07-15 12:59:49.595589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.595610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 
00:29:57.785 [2024-07-15 12:59:49.595858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.595877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 00:29:57.785 [2024-07-15 12:59:49.596060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.596081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 00:29:57.785 [2024-07-15 12:59:49.596292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.596313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 00:29:57.785 [2024-07-15 12:59:49.596517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.596538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 00:29:57.785 [2024-07-15 12:59:49.596702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.596722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 
00:29:57.785 [2024-07-15 12:59:49.596835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.596853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 00:29:57.785 [2024-07-15 12:59:49.597059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.597079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 00:29:57.785 [2024-07-15 12:59:49.597279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.597300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 00:29:57.785 [2024-07-15 12:59:49.597415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.597435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 00:29:57.785 [2024-07-15 12:59:49.597537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.597556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 
00:29:57.785 [2024-07-15 12:59:49.597792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.597817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 00:29:57.785 [2024-07-15 12:59:49.598063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.598083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 00:29:57.785 [2024-07-15 12:59:49.598207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.598226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 00:29:57.785 [2024-07-15 12:59:49.598397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.598417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 00:29:57.785 [2024-07-15 12:59:49.598578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.598598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 
00:29:57.785 [2024-07-15 12:59:49.598768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.598787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 00:29:57.785 [2024-07-15 12:59:49.598968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.598986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 00:29:57.785 [2024-07-15 12:59:49.599188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.599207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 00:29:57.785 [2024-07-15 12:59:49.599383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.785 [2024-07-15 12:59:49.599403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.785 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.599663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.599683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 
00:29:57.786 [2024-07-15 12:59:49.599925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.599944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.600043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.600062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.600222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.600240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.600348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.600366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.600546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.600565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 
00:29:57.786 [2024-07-15 12:59:49.600804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.600822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.600983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.601002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.601109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.601126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.601290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.601310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.601400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.601418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 
00:29:57.786 [2024-07-15 12:59:49.601508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.601526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.601621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.601638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.601782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.601800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.601962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.601981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.602109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.602126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 
00:29:57.786 [2024-07-15 12:59:49.602310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.602328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.602491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.602509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.602704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.602722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.602901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.602919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.603182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.603200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 
00:29:57.786 [2024-07-15 12:59:49.603379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.603398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.603570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.603588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.603698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.603716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.603970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.603988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.604169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.604187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 
00:29:57.786 [2024-07-15 12:59:49.604448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.604467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.604637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.604655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.604840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.604858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.604977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.604995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.605158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.605176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 
00:29:57.786 [2024-07-15 12:59:49.605305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.605327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.605492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.605510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.605670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.605688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.605879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.605897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.605988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.606006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 
00:29:57.786 [2024-07-15 12:59:49.606239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.606264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.606448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.606466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.606678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.606696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.786 [2024-07-15 12:59:49.607013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.786 [2024-07-15 12:59:49.607032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.786 qpair failed and we were unable to recover it. 00:29:57.787 [2024-07-15 12:59:49.607280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.787 [2024-07-15 12:59:49.607299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.787 qpair failed and we were unable to recover it. 
00:29:57.787 [2024-07-15 12:59:49.607561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.787 [2024-07-15 12:59:49.607580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.787 qpair failed and we were unable to recover it. 00:29:57.787 [2024-07-15 12:59:49.607812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.787 [2024-07-15 12:59:49.607830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.787 qpair failed and we were unable to recover it. 00:29:57.787 [2024-07-15 12:59:49.607949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.787 [2024-07-15 12:59:49.607967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.787 qpair failed and we were unable to recover it. 00:29:57.787 [2024-07-15 12:59:49.608132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.787 [2024-07-15 12:59:49.608150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.787 qpair failed and we were unable to recover it. 00:29:57.787 [2024-07-15 12:59:49.608417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.787 [2024-07-15 12:59:49.608436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.787 qpair failed and we were unable to recover it. 
00:29:57.787 [2024-07-15 12:59:49.608733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.787 [2024-07-15 12:59:49.608751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.787 qpair failed and we were unable to recover it. 00:29:57.787 [2024-07-15 12:59:49.609060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.787 [2024-07-15 12:59:49.609078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.787 qpair failed and we were unable to recover it. 00:29:57.787 [2024-07-15 12:59:49.609400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.787 [2024-07-15 12:59:49.609419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.787 qpair failed and we were unable to recover it. 00:29:57.787 [2024-07-15 12:59:49.609589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.787 [2024-07-15 12:59:49.609606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.787 qpair failed and we were unable to recover it. 00:29:57.787 [2024-07-15 12:59:49.609778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.787 [2024-07-15 12:59:49.609796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.787 qpair failed and we were unable to recover it. 
00:29:57.787 [2024-07-15 12:59:49.610043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.787 [2024-07-15 12:59:49.610061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.787 qpair failed and we were unable to recover it.
00:29:57.787 [2024-07-15 12:59:49.610171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.787 [2024-07-15 12:59:49.610189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.787 qpair failed and we were unable to recover it.
00:29:57.787 [2024-07-15 12:59:49.610304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.787 [2024-07-15 12:59:49.610322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.787 qpair failed and we were unable to recover it.
00:29:57.787 [2024-07-15 12:59:49.610496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.787 [2024-07-15 12:59:49.610514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.787 qpair failed and we were unable to recover it.
00:29:57.787 [2024-07-15 12:59:49.610638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.787 [2024-07-15 12:59:49.610656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.787 qpair failed and we were unable to recover it.
00:29:57.787 [2024-07-15 12:59:49.610854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.787 [2024-07-15 12:59:49.610871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.787 qpair failed and we were unable to recover it.
00:29:57.787 [2024-07-15 12:59:49.611138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.787 [2024-07-15 12:59:49.611155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.787 qpair failed and we were unable to recover it.
00:29:57.787 [2024-07-15 12:59:49.611393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.787 [2024-07-15 12:59:49.611413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.787 qpair failed and we were unable to recover it.
00:29:57.787 [2024-07-15 12:59:49.611664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.787 [2024-07-15 12:59:49.611682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.787 qpair failed and we were unable to recover it.
00:29:57.787 [2024-07-15 12:59:49.611860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.787 [2024-07-15 12:59:49.611878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.787 qpair failed and we were unable to recover it.
00:29:57.787 [2024-07-15 12:59:49.612065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.787 [2024-07-15 12:59:49.612083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.787 qpair failed and we were unable to recover it.
00:29:57.787 [2024-07-15 12:59:49.612349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.787 [2024-07-15 12:59:49.612368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.787 qpair failed and we were unable to recover it.
00:29:57.787 [2024-07-15 12:59:49.612542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.787 [2024-07-15 12:59:49.612560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.787 qpair failed and we were unable to recover it.
00:29:57.787 [2024-07-15 12:59:49.612848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.787 [2024-07-15 12:59:49.612866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.787 qpair failed and we were unable to recover it.
00:29:57.787 [2024-07-15 12:59:49.613122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.787 [2024-07-15 12:59:49.613140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.787 qpair failed and we were unable to recover it.
00:29:57.787 [2024-07-15 12:59:49.613283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.787 [2024-07-15 12:59:49.613301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.787 qpair failed and we were unable to recover it.
00:29:57.787 [2024-07-15 12:59:49.613567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.787 [2024-07-15 12:59:49.613585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.787 qpair failed and we were unable to recover it.
00:29:57.787 [2024-07-15 12:59:49.613693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.787 [2024-07-15 12:59:49.613711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.787 qpair failed and we were unable to recover it.
00:29:57.787 [2024-07-15 12:59:49.613876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.787 [2024-07-15 12:59:49.613893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.787 qpair failed and we were unable to recover it.
00:29:57.787 [2024-07-15 12:59:49.614005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.787 [2024-07-15 12:59:49.614023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.787 qpair failed and we were unable to recover it.
00:29:57.787 [2024-07-15 12:59:49.614301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.614322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.614488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.614506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.614705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.614723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.614903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.614922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.615175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.615193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.615462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.615481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.615719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.615737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.615897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.615915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.616175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.616193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.616501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.616519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.616699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.616717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.616880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.616898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.617146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.617164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.617356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.617375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.617564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.617583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.617860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.617878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.618127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.618144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.618378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.618396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.618654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.618672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.618870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.618887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.618993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.619011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.619170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.619187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.619445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.619464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.619640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.619658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.619949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.619967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.620158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.620176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.620441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.620459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.620638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.620657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.620945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.620963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.621124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.621142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.621331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.621350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.621627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.621645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.621835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.621853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.622124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.622142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.622373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.622391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.622638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.622655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.622919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.622937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.623199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.623217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.623458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.623477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.623710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.623728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.623901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.788 [2024-07-15 12:59:49.623923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.788 qpair failed and we were unable to recover it.
00:29:57.788 [2024-07-15 12:59:49.624179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.624196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.624312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.624331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.624493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.624510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.624676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.624694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.624857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.624875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.625105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.625123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.625398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.625417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.625592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.625610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.625842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.625860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.626120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.626138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.626389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.626407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.626715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.626733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.626862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.626880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.627147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.627165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.627331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.627350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.627650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.627668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.627926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.627944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.628197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.628214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.628376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.628395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.628626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.628644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.628944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.628961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.629141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.629159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.629392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.629411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.629590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.629609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.789 qpair failed and we were unable to recover it.
00:29:57.789 [2024-07-15 12:59:49.629788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.789 [2024-07-15 12:59:49.629806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.630037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.630055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.630290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.630309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.630487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.630505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.630683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.630701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.630935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.630953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.631162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.631180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.631370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.631389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.631623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.631641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.631927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.631945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.632198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.632217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.632452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.632470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.632579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.632597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.632848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.632866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.633055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.633073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.633372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.633394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.633576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.633594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.633688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.633706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.633880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.633897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.634128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.634146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.634340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.634358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.634590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.634608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.634883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.634901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.635157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.635175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.635423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.635442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.635660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.635677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.635845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.635863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.636101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.636119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.636233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.636251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.636505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.790 [2024-07-15 12:59:49.636524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.790 qpair failed and we were unable to recover it.
00:29:57.790 [2024-07-15 12:59:49.636758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.790 [2024-07-15 12:59:49.636776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.790 qpair failed and we were unable to recover it. 00:29:57.790 [2024-07-15 12:59:49.636923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.790 [2024-07-15 12:59:49.636941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.790 qpair failed and we were unable to recover it. 00:29:57.790 [2024-07-15 12:59:49.637198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.790 [2024-07-15 12:59:49.637216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.790 qpair failed and we were unable to recover it. 00:29:57.790 [2024-07-15 12:59:49.637462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.790 [2024-07-15 12:59:49.637481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.790 qpair failed and we were unable to recover it. 00:29:57.790 [2024-07-15 12:59:49.637742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.790 [2024-07-15 12:59:49.637761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.790 qpair failed and we were unable to recover it. 
00:29:57.790 [2024-07-15 12:59:49.638001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.790 [2024-07-15 12:59:49.638018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.790 qpair failed and we were unable to recover it. 00:29:57.790 [2024-07-15 12:59:49.638252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.790 [2024-07-15 12:59:49.638277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.790 qpair failed and we were unable to recover it. 00:29:57.790 [2024-07-15 12:59:49.638441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.790 [2024-07-15 12:59:49.638460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.790 qpair failed and we were unable to recover it. 00:29:57.790 [2024-07-15 12:59:49.638721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.790 [2024-07-15 12:59:49.638739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.790 qpair failed and we were unable to recover it. 00:29:57.790 [2024-07-15 12:59:49.639008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.790 [2024-07-15 12:59:49.639026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.790 qpair failed and we were unable to recover it. 
00:29:57.790 [2024-07-15 12:59:49.639208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.790 [2024-07-15 12:59:49.639226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.790 qpair failed and we were unable to recover it. 00:29:57.790 [2024-07-15 12:59:49.639526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.790 [2024-07-15 12:59:49.639545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.790 qpair failed and we were unable to recover it. 00:29:57.790 [2024-07-15 12:59:49.639739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.639757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.639869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.639887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.640059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.640077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 
00:29:57.791 [2024-07-15 12:59:49.640275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.640294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.640468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.640486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.640766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.640783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.640882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.640900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.641132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.641151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 
00:29:57.791 [2024-07-15 12:59:49.641263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.641282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.641446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.641463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.641639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.641658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.641833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.641851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.642087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.642105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 
00:29:57.791 [2024-07-15 12:59:49.642364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.642386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.642626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.642644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.642908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.642926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.643163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.643180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.643430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.643449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 
00:29:57.791 [2024-07-15 12:59:49.643614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.643632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.643797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.643814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.644027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.644045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.644282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.644301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.644530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.644548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 
00:29:57.791 [2024-07-15 12:59:49.644730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.644748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.644919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.644937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.645199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.645217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.645448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.645467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.645585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.645603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 
00:29:57.791 [2024-07-15 12:59:49.645836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.645853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.646027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.646045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.646282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.646300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.646547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.646565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.646796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.646814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 
00:29:57.791 [2024-07-15 12:59:49.647075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.647093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.647345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.647364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.647470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.647488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.647596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.647614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.647801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.647819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 
00:29:57.791 [2024-07-15 12:59:49.648112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.648129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.648389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.648408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.648670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.648690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.648853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.648871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 00:29:57.791 [2024-07-15 12:59:49.649079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.791 [2024-07-15 12:59:49.649096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.791 qpair failed and we were unable to recover it. 
00:29:57.791 [2024-07-15 12:59:49.649329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.792 [2024-07-15 12:59:49.649347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.792 qpair failed and we were unable to recover it. 00:29:57.792 [2024-07-15 12:59:49.649596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.792 [2024-07-15 12:59:49.649615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.792 qpair failed and we were unable to recover it. 00:29:57.792 [2024-07-15 12:59:49.649808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.792 [2024-07-15 12:59:49.649826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.792 qpair failed and we were unable to recover it. 00:29:57.792 [2024-07-15 12:59:49.650004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.792 [2024-07-15 12:59:49.650022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.792 qpair failed and we were unable to recover it. 00:29:57.792 [2024-07-15 12:59:49.650280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.792 [2024-07-15 12:59:49.650299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.792 qpair failed and we were unable to recover it. 
00:29:57.792 [2024-07-15 12:59:49.650574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.792 [2024-07-15 12:59:49.650592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.792 qpair failed and we were unable to recover it. 00:29:57.792 [2024-07-15 12:59:49.650785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.792 [2024-07-15 12:59:49.650802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.792 qpair failed and we were unable to recover it. 00:29:57.792 [2024-07-15 12:59:49.650984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.792 [2024-07-15 12:59:49.651002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.792 qpair failed and we were unable to recover it. 00:29:57.792 [2024-07-15 12:59:49.651262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.792 [2024-07-15 12:59:49.651281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.792 qpair failed and we were unable to recover it. 00:29:57.792 [2024-07-15 12:59:49.651575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.792 [2024-07-15 12:59:49.651593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.792 qpair failed and we were unable to recover it. 
00:29:57.792 [2024-07-15 12:59:49.651835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.792 [2024-07-15 12:59:49.651853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.792 qpair failed and we were unable to recover it. 00:29:57.792 [2024-07-15 12:59:49.652119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.792 [2024-07-15 12:59:49.652136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.792 qpair failed and we were unable to recover it. 00:29:57.792 [2024-07-15 12:59:49.652300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.792 [2024-07-15 12:59:49.652319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.792 qpair failed and we were unable to recover it. 00:29:57.792 [2024-07-15 12:59:49.652578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.792 [2024-07-15 12:59:49.652597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.792 qpair failed and we were unable to recover it. 00:29:57.792 [2024-07-15 12:59:49.652777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.792 [2024-07-15 12:59:49.652795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.792 qpair failed and we were unable to recover it. 
00:29:57.792 [2024-07-15 12:59:49.652976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.792 [2024-07-15 12:59:49.652995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.792 qpair failed and we were unable to recover it. 00:29:57.792 [2024-07-15 12:59:49.653242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.792 [2024-07-15 12:59:49.653267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.792 qpair failed and we were unable to recover it. 00:29:57.792 [2024-07-15 12:59:49.653563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.792 [2024-07-15 12:59:49.653582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.792 qpair failed and we were unable to recover it. 00:29:57.792 [2024-07-15 12:59:49.653771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.792 [2024-07-15 12:59:49.653789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.792 qpair failed and we were unable to recover it. 00:29:57.792 [2024-07-15 12:59:49.654025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.792 [2024-07-15 12:59:49.654042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:57.792 qpair failed and we were unable to recover it. 
00:29:57.792 [2024-07-15 12:59:49.654241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.792 [2024-07-15 12:59:49.654263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:57.792 qpair failed and we were unable to recover it.
[... the same three-record sequence — connect() failed (errno = 111), sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it. — repeats continuously from 12:59:49.654 through 12:59:49.672 ...]
00:29:58.075 [2024-07-15 12:59:49.672643] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:58.075 [2024-07-15 12:59:49.672705] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:58.075 [2024-07-15 12:59:49.672727] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:58.075 [2024-07-15 12:59:49.672750] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:58.075 [2024-07-15 12:59:49.672767] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:58.075 [2024-07-15 12:59:49.672907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:29:58.075 [2024-07-15 12:59:49.673019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:29:58.075 [2024-07-15 12:59:49.673147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:29:58.075 [2024-07-15 12:59:49.673152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
[... the connect() failed (errno = 111) / qpair failed sequence for tqpair=0x7f75d8000b90 (addr=10.0.0.2, port=4420) continues, interleaved with the notices above, through 12:59:49.681985 ...]
00:29:58.076 [2024-07-15 12:59:49.682238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.076 [2024-07-15 12:59:49.682263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.076 qpair failed and we were unable to recover it. 00:29:58.076 [2024-07-15 12:59:49.682516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.076 [2024-07-15 12:59:49.682534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.076 qpair failed and we were unable to recover it. 00:29:58.076 [2024-07-15 12:59:49.682721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.076 [2024-07-15 12:59:49.682739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.076 qpair failed and we were unable to recover it. 00:29:58.076 [2024-07-15 12:59:49.683028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.076 [2024-07-15 12:59:49.683047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.076 qpair failed and we were unable to recover it. 00:29:58.076 [2024-07-15 12:59:49.683327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.076 [2024-07-15 12:59:49.683347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.076 qpair failed and we were unable to recover it. 
00:29:58.076 [2024-07-15 12:59:49.683664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.076 [2024-07-15 12:59:49.683683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.076 qpair failed and we were unable to recover it. 00:29:58.076 [2024-07-15 12:59:49.683973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.076 [2024-07-15 12:59:49.683991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.076 qpair failed and we were unable to recover it. 00:29:58.076 [2024-07-15 12:59:49.684269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.076 [2024-07-15 12:59:49.684289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.076 qpair failed and we were unable to recover it. 00:29:58.076 [2024-07-15 12:59:49.684480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.076 [2024-07-15 12:59:49.684499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.076 qpair failed and we were unable to recover it. 00:29:58.076 [2024-07-15 12:59:49.684767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.076 [2024-07-15 12:59:49.684786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.076 qpair failed and we were unable to recover it. 
00:29:58.076 [2024-07-15 12:59:49.684975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.076 [2024-07-15 12:59:49.684993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.076 qpair failed and we were unable to recover it. 00:29:58.076 [2024-07-15 12:59:49.685085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.076 [2024-07-15 12:59:49.685104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.076 qpair failed and we were unable to recover it. 00:29:58.076 [2024-07-15 12:59:49.685366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.076 [2024-07-15 12:59:49.685386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.076 qpair failed and we were unable to recover it. 00:29:58.076 [2024-07-15 12:59:49.685557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.076 [2024-07-15 12:59:49.685577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.076 qpair failed and we were unable to recover it. 00:29:58.076 [2024-07-15 12:59:49.685778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.076 [2024-07-15 12:59:49.685797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.076 qpair failed and we were unable to recover it. 
00:29:58.076 [2024-07-15 12:59:49.685984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.076 [2024-07-15 12:59:49.686002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.076 qpair failed and we were unable to recover it. 00:29:58.076 [2024-07-15 12:59:49.686294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.076 [2024-07-15 12:59:49.686314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.076 qpair failed and we were unable to recover it. 00:29:58.076 [2024-07-15 12:59:49.686578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.076 [2024-07-15 12:59:49.686596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.076 qpair failed and we were unable to recover it. 00:29:58.076 [2024-07-15 12:59:49.686777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.686795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.687081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.687101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 
00:29:58.077 [2024-07-15 12:59:49.687352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.687372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.687645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.687664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.687850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.687869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.687968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.687986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.688202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.688221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 
00:29:58.077 [2024-07-15 12:59:49.688479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.688499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.688797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.688816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.689105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.689124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.689334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.689354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.689532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.689551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 
00:29:58.077 [2024-07-15 12:59:49.689833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.689852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.690124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.690142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.690396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.690416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.690603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.690622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.690916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.690934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 
00:29:58.077 [2024-07-15 12:59:49.691200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.691224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.691449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.691469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.691723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.691741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.692010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.692029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.692238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.692272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 
00:29:58.077 [2024-07-15 12:59:49.692443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.692462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.692699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.692718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.692904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.692923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.693224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.693242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.693505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.693524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 
00:29:58.077 [2024-07-15 12:59:49.693782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.693801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.694092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.694111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.694350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.694370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.694610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.694629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.694802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.694822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 
00:29:58.077 [2024-07-15 12:59:49.695087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.695107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.695374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.695394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.695659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.695678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.695845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.695864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.696133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.696153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 
00:29:58.077 [2024-07-15 12:59:49.696406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.696427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.696697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.696717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.696887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.696906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.077 qpair failed and we were unable to recover it. 00:29:58.077 [2024-07-15 12:59:49.697076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.077 [2024-07-15 12:59:49.697096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.078 qpair failed and we were unable to recover it. 00:29:58.078 [2024-07-15 12:59:49.697206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.078 [2024-07-15 12:59:49.697225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.078 qpair failed and we were unable to recover it. 
00:29:58.078 [2024-07-15 12:59:49.697501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.078 [2024-07-15 12:59:49.697523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.078 qpair failed and we were unable to recover it. 00:29:58.078 [2024-07-15 12:59:49.697833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.078 [2024-07-15 12:59:49.697853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.078 qpair failed and we were unable to recover it. 00:29:58.078 [2024-07-15 12:59:49.698100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.078 [2024-07-15 12:59:49.698119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.078 qpair failed and we were unable to recover it. 00:29:58.078 [2024-07-15 12:59:49.698323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.078 [2024-07-15 12:59:49.698344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.078 qpair failed and we were unable to recover it. 00:29:58.078 [2024-07-15 12:59:49.698633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.078 [2024-07-15 12:59:49.698653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.078 qpair failed and we were unable to recover it. 
00:29:58.078 [2024-07-15 12:59:49.698890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.078 [2024-07-15 12:59:49.698909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.078 qpair failed and we were unable to recover it. 00:29:58.078 [2024-07-15 12:59:49.699095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.078 [2024-07-15 12:59:49.699114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.078 qpair failed and we were unable to recover it. 00:29:58.078 [2024-07-15 12:59:49.699373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.078 [2024-07-15 12:59:49.699394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.078 qpair failed and we were unable to recover it. 00:29:58.078 [2024-07-15 12:59:49.699688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.078 [2024-07-15 12:59:49.699707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.078 qpair failed and we were unable to recover it. 00:29:58.078 [2024-07-15 12:59:49.700020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.078 [2024-07-15 12:59:49.700039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.078 qpair failed and we were unable to recover it. 
00:29:58.078 [2024-07-15 12:59:49.700332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.078 [2024-07-15 12:59:49.700351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.078 qpair failed and we were unable to recover it. 00:29:58.078 [2024-07-15 12:59:49.700553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.078 [2024-07-15 12:59:49.700572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.078 qpair failed and we were unable to recover it. 00:29:58.078 [2024-07-15 12:59:49.700826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.078 [2024-07-15 12:59:49.700846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.078 qpair failed and we were unable to recover it. 00:29:58.078 [2024-07-15 12:59:49.701113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.078 [2024-07-15 12:59:49.701132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.078 qpair failed and we were unable to recover it. 00:29:58.078 [2024-07-15 12:59:49.701412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.078 [2024-07-15 12:59:49.701433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.078 qpair failed and we were unable to recover it. 
00:29:58.078 [2024-07-15 12:59:49.701629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.078 [2024-07-15 12:59:49.701653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.078 qpair failed and we were unable to recover it.
00:29:58.081 [2024-07-15 12:59:49.731621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.731639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 00:29:58.081 [2024-07-15 12:59:49.731874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.731893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 00:29:58.081 [2024-07-15 12:59:49.732188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.732206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 00:29:58.081 [2024-07-15 12:59:49.732450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.732469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 00:29:58.081 [2024-07-15 12:59:49.732734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.732752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 
00:29:58.081 [2024-07-15 12:59:49.732878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.732897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 00:29:58.081 [2024-07-15 12:59:49.733101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.733120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 00:29:58.081 [2024-07-15 12:59:49.733381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.733400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 00:29:58.081 [2024-07-15 12:59:49.733610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.733628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 00:29:58.081 [2024-07-15 12:59:49.733896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.733914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 
00:29:58.081 [2024-07-15 12:59:49.734080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.734098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 00:29:58.081 [2024-07-15 12:59:49.734335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.734355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 00:29:58.081 [2024-07-15 12:59:49.734546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.734564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 00:29:58.081 [2024-07-15 12:59:49.734728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.734746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 00:29:58.081 [2024-07-15 12:59:49.734923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.734941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 
00:29:58.081 [2024-07-15 12:59:49.735128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.735146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 00:29:58.081 [2024-07-15 12:59:49.735318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.735337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 00:29:58.081 [2024-07-15 12:59:49.735529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.735548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 00:29:58.081 [2024-07-15 12:59:49.735794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.735812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 00:29:58.081 [2024-07-15 12:59:49.736118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.736136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 
00:29:58.081 [2024-07-15 12:59:49.736375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.736393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 00:29:58.081 [2024-07-15 12:59:49.736667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.736686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 00:29:58.081 [2024-07-15 12:59:49.736943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.081 [2024-07-15 12:59:49.736961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.081 qpair failed and we were unable to recover it. 00:29:58.081 [2024-07-15 12:59:49.737263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.737282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.737518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.737537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 
00:29:58.082 [2024-07-15 12:59:49.737800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.737818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.737949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.737968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.738067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.738085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.738350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.738370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.738630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.738649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 
00:29:58.082 [2024-07-15 12:59:49.738914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.738932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.739166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.739184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.739429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.739451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.739731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.739748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.740014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.740033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 
00:29:58.082 [2024-07-15 12:59:49.740293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.740311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.740582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.740600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.740814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.740832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.740929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.740947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.741112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.741129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 
00:29:58.082 [2024-07-15 12:59:49.741244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.741269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.741505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.741523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.741709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.741727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.741935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.741954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.742217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.742236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 
00:29:58.082 [2024-07-15 12:59:49.742405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.742424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.742690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.742709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.742970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.742989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.743153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.743171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.743446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.743466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 
00:29:58.082 [2024-07-15 12:59:49.743760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.743778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.744037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.744057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.744249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.744274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.744456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.744475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.744758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.744776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 
00:29:58.082 [2024-07-15 12:59:49.745042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.745060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.745227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.745244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.745543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.745562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.745807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.745826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.746032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.746051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 
00:29:58.082 [2024-07-15 12:59:49.746228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.746246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.746423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.746442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.746653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.746671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.082 [2024-07-15 12:59:49.746943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.082 [2024-07-15 12:59:49.746962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.082 qpair failed and we were unable to recover it. 00:29:58.083 [2024-07-15 12:59:49.747197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.083 [2024-07-15 12:59:49.747215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.083 qpair failed and we were unable to recover it. 
00:29:58.083 [2024-07-15 12:59:49.747376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.083 [2024-07-15 12:59:49.747397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.083 qpair failed and we were unable to recover it. 00:29:58.083 [2024-07-15 12:59:49.747660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.083 [2024-07-15 12:59:49.747678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.083 qpair failed and we were unable to recover it. 00:29:58.083 [2024-07-15 12:59:49.747877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.083 [2024-07-15 12:59:49.747896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.083 qpair failed and we were unable to recover it. 00:29:58.083 [2024-07-15 12:59:49.748158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.083 [2024-07-15 12:59:49.748177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.083 qpair failed and we were unable to recover it. 00:29:58.083 [2024-07-15 12:59:49.748466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.083 [2024-07-15 12:59:49.748485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.083 qpair failed and we were unable to recover it. 
00:29:58.083 [2024-07-15 12:59:49.748733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.083 [2024-07-15 12:59:49.748752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.083 qpair failed and we were unable to recover it. 00:29:58.083 [2024-07-15 12:59:49.748989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.083 [2024-07-15 12:59:49.749007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.083 qpair failed and we were unable to recover it. 00:29:58.083 [2024-07-15 12:59:49.749270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.083 [2024-07-15 12:59:49.749293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.083 qpair failed and we were unable to recover it. 00:29:58.083 [2024-07-15 12:59:49.749539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.083 [2024-07-15 12:59:49.749556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.083 qpair failed and we were unable to recover it. 00:29:58.083 [2024-07-15 12:59:49.749790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.083 [2024-07-15 12:59:49.749809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.083 qpair failed and we were unable to recover it. 
00:29:58.083 [2024-07-15 12:59:49.750071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.083 [2024-07-15 12:59:49.750089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.083 qpair failed and we were unable to recover it. 00:29:58.083 [2024-07-15 12:59:49.750358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.083 [2024-07-15 12:59:49.750379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.083 qpair failed and we were unable to recover it. 00:29:58.083 [2024-07-15 12:59:49.750576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.083 [2024-07-15 12:59:49.750595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.083 qpair failed and we were unable to recover it. 00:29:58.083 [2024-07-15 12:59:49.750801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.083 [2024-07-15 12:59:49.750820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.083 qpair failed and we were unable to recover it. 00:29:58.083 [2024-07-15 12:59:49.751006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.083 [2024-07-15 12:59:49.751024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.083 qpair failed and we were unable to recover it. 
00:29:58.086 [2024-07-15 12:59:49.778365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.778384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.778645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.778663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.778892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.778913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.779157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.779175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.779385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.779403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 
00:29:58.086 [2024-07-15 12:59:49.779601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.779619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.779879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.779897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.780161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.780179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.780411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.780429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.780673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.780691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 
00:29:58.086 [2024-07-15 12:59:49.780922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.780939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.781168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.781186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.781388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.781406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.781692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.781710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.781890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.781908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 
00:29:58.086 [2024-07-15 12:59:49.782029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.782047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.782282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.782301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.782465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.782483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.782739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.782756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.782954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.782971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 
00:29:58.086 [2024-07-15 12:59:49.783199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.783217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.783470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.783488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.783742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.783760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.784035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.784053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.784311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.784330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 
00:29:58.086 [2024-07-15 12:59:49.784441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.784458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.784622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.784639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.784885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.784903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.785066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.785084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 00:29:58.086 [2024-07-15 12:59:49.785350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.785369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.086 qpair failed and we were unable to recover it. 
00:29:58.086 [2024-07-15 12:59:49.785628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.086 [2024-07-15 12:59:49.785646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.785932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.785950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.786145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.786164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.786364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.786383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.786641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.786659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 
00:29:58.087 [2024-07-15 12:59:49.786891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.786908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.787086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.787103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.787333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.787351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.787542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.787560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.787819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.787837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 
00:29:58.087 [2024-07-15 12:59:49.788043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.788062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.788186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.788204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.788464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.788486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.788743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.788760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.788923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.788940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 
00:29:58.087 [2024-07-15 12:59:49.789125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.789143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.789384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.789403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.789701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.789719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.789918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.789936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.790194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.790212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 
00:29:58.087 [2024-07-15 12:59:49.790394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.790413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.790700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.790718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.790947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.790965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.791249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.791284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.791448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.791466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 
00:29:58.087 [2024-07-15 12:59:49.791696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.791714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.791882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.791899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.792175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.792192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.792479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.792497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.792751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.792768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 
00:29:58.087 [2024-07-15 12:59:49.793026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.793043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.793224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.793242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.793433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.793452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.793733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.793751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.793995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.794013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 
00:29:58.087 [2024-07-15 12:59:49.794249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.794283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.794524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.794541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.087 [2024-07-15 12:59:49.794793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.087 [2024-07-15 12:59:49.794810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.087 qpair failed and we were unable to recover it. 00:29:58.088 [2024-07-15 12:59:49.795081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.088 [2024-07-15 12:59:49.795099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.088 qpair failed and we were unable to recover it. 00:29:58.088 [2024-07-15 12:59:49.795384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.088 [2024-07-15 12:59:49.795404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.088 qpair failed and we were unable to recover it. 
00:29:58.088 [2024-07-15 12:59:49.795666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.088 [2024-07-15 12:59:49.795684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.088 qpair failed and we were unable to recover it. 00:29:58.088 [2024-07-15 12:59:49.795852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.088 [2024-07-15 12:59:49.795870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.088 qpair failed and we were unable to recover it. 00:29:58.088 [2024-07-15 12:59:49.796043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.088 [2024-07-15 12:59:49.796061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.088 qpair failed and we were unable to recover it. 00:29:58.088 [2024-07-15 12:59:49.796261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.088 [2024-07-15 12:59:49.796280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.088 qpair failed and we were unable to recover it. 00:29:58.088 [2024-07-15 12:59:49.796533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.088 [2024-07-15 12:59:49.796551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.088 qpair failed and we were unable to recover it. 
00:29:58.088 [2024-07-15 12:59:49.796782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.088 [2024-07-15 12:59:49.796799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.088 qpair failed and we were unable to recover it.
[... the same three-line failure sequence -- connect() refused (errno = 111, ECONNREFUSED), sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420, qpair not recovered -- repeats for every retry from 12:59:49.796977 through 12:59:49.823482 ...]
00:29:58.091 [2024-07-15 12:59:49.823482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.091 [2024-07-15 12:59:49.823500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.091 qpair failed and we were unable to recover it.
00:29:58.091 [2024-07-15 12:59:49.823672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.823690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.823867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.823885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.824044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.824061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.824303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.824321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.824513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.824531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 
00:29:58.091 [2024-07-15 12:59:49.824727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.824745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.824948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.824965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.825199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.825217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.825465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.825484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.825742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.825760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 
00:29:58.091 [2024-07-15 12:59:49.825995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.826013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.826189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.826207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.826411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.826432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.826621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.826639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.826927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.826944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 
00:29:58.091 [2024-07-15 12:59:49.827055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.827073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.827328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.827347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.827612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.827630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.827793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.827810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.828065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.828083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 
00:29:58.091 [2024-07-15 12:59:49.828343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.828362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.828596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.828613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.828842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.828859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.829038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.829056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.829313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.829331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 
00:29:58.091 [2024-07-15 12:59:49.829574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.829592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.829852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.829869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.830118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.830135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.830393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.830412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.830588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.830606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 
00:29:58.091 [2024-07-15 12:59:49.830879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.830897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.831194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.831211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.831454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.831473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.831734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.831752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.831932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.831950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 
00:29:58.091 [2024-07-15 12:59:49.832185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.832204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.091 [2024-07-15 12:59:49.832380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.091 [2024-07-15 12:59:49.832398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.091 qpair failed and we were unable to recover it. 00:29:58.092 [2024-07-15 12:59:49.832653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.832671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 00:29:58.092 [2024-07-15 12:59:49.832896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.832913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 00:29:58.092 [2024-07-15 12:59:49.833173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.833191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 
00:29:58.092 [2024-07-15 12:59:49.833410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.833429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 00:29:58.092 [2024-07-15 12:59:49.833636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.833654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 00:29:58.092 [2024-07-15 12:59:49.833829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.833847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 00:29:58.092 [2024-07-15 12:59:49.834151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.834168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 00:29:58.092 [2024-07-15 12:59:49.834400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.834419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 
00:29:58.092 [2024-07-15 12:59:49.834663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.834681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 00:29:58.092 [2024-07-15 12:59:49.834911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.834929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 00:29:58.092 [2024-07-15 12:59:49.835187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.835205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 00:29:58.092 [2024-07-15 12:59:49.835436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.835454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 00:29:58.092 [2024-07-15 12:59:49.835704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.835721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 
00:29:58.092 [2024-07-15 12:59:49.835978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.835996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 00:29:58.092 [2024-07-15 12:59:49.836252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.836276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 00:29:58.092 [2024-07-15 12:59:49.836485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.836506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 00:29:58.092 [2024-07-15 12:59:49.836792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.836810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 00:29:58.092 [2024-07-15 12:59:49.836991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.837008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 
00:29:58.092 [2024-07-15 12:59:49.837272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.837291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 00:29:58.092 [2024-07-15 12:59:49.837554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.837572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 00:29:58.092 [2024-07-15 12:59:49.837804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.837821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 00:29:58.092 [2024-07-15 12:59:49.838067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.838084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 00:29:58.092 [2024-07-15 12:59:49.838351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.838370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 
00:29:58.092 [2024-07-15 12:59:49.838604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.838622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 00:29:58.092 [2024-07-15 12:59:49.838871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.092 [2024-07-15 12:59:49.838889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.092 qpair failed and we were unable to recover it. 00:29:58.092 [2024-07-15 12:59:49.839146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.093 [2024-07-15 12:59:49.839163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.093 qpair failed and we were unable to recover it. 00:29:58.093 [2024-07-15 12:59:49.839395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.093 [2024-07-15 12:59:49.839414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.093 qpair failed and we were unable to recover it. 00:29:58.093 [2024-07-15 12:59:49.839658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.093 [2024-07-15 12:59:49.839676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.093 qpair failed and we were unable to recover it. 
00:29:58.093 [2024-07-15 12:59:49.839837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.093 [2024-07-15 12:59:49.839855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.093 qpair failed and we were unable to recover it. 00:29:58.093 [2024-07-15 12:59:49.840042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.093 [2024-07-15 12:59:49.840059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.093 qpair failed and we were unable to recover it. 00:29:58.093 [2024-07-15 12:59:49.840305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.093 [2024-07-15 12:59:49.840324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.093 qpair failed and we were unable to recover it. 00:29:58.093 [2024-07-15 12:59:49.840591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.093 [2024-07-15 12:59:49.840609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.093 qpair failed and we were unable to recover it. 00:29:58.093 [2024-07-15 12:59:49.840896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.093 [2024-07-15 12:59:49.840913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.093 qpair failed and we were unable to recover it. 
00:29:58.093 [2024-07-15 12:59:49.841226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.093 [2024-07-15 12:59:49.841244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.093 qpair failed and we were unable to recover it. 00:29:58.093 [2024-07-15 12:59:49.841453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.093 [2024-07-15 12:59:49.841471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.093 qpair failed and we were unable to recover it. 00:29:58.093 [2024-07-15 12:59:49.841718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.093 [2024-07-15 12:59:49.841736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.093 qpair failed and we were unable to recover it. 00:29:58.093 [2024-07-15 12:59:49.841972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.093 [2024-07-15 12:59:49.841989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.093 qpair failed and we were unable to recover it. 00:29:58.093 [2024-07-15 12:59:49.842243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.093 [2024-07-15 12:59:49.842273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.093 qpair failed and we were unable to recover it. 
00:29:58.093 [2024-07-15 12:59:49.842514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.093 [2024-07-15 12:59:49.842532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.093 qpair failed and we were unable to recover it. 
00:29:58.093 [... same error triplet repeated for every reconnect attempt through 2024-07-15 12:59:49.869669: connect() to 10.0.0.2:4420 failed with errno = 111 (ECONNREFUSED) and the qpair could not be recovered ...] 
00:29:58.096 [2024-07-15 12:59:49.869927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.096 [2024-07-15 12:59:49.869945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.096 qpair failed and we were unable to recover it. 00:29:58.096 [2024-07-15 12:59:49.870176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.096 [2024-07-15 12:59:49.870193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.096 qpair failed and we were unable to recover it. 00:29:58.096 [2024-07-15 12:59:49.870407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.096 [2024-07-15 12:59:49.870425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.096 qpair failed and we were unable to recover it. 00:29:58.096 [2024-07-15 12:59:49.870532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.096 [2024-07-15 12:59:49.870550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.096 qpair failed and we were unable to recover it. 00:29:58.096 [2024-07-15 12:59:49.870787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.096 [2024-07-15 12:59:49.870804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.096 qpair failed and we were unable to recover it. 
00:29:58.096 [2024-07-15 12:59:49.871068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.096 [2024-07-15 12:59:49.871086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.096 qpair failed and we were unable to recover it. 00:29:58.096 [2024-07-15 12:59:49.871334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.096 [2024-07-15 12:59:49.871352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.096 qpair failed and we were unable to recover it. 00:29:58.096 [2024-07-15 12:59:49.871530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.096 [2024-07-15 12:59:49.871548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.096 qpair failed and we were unable to recover it. 00:29:58.096 [2024-07-15 12:59:49.871825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.096 [2024-07-15 12:59:49.871843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.096 qpair failed and we were unable to recover it. 00:29:58.096 [2024-07-15 12:59:49.872022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.096 [2024-07-15 12:59:49.872042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.096 qpair failed and we were unable to recover it. 
00:29:58.096 [2024-07-15 12:59:49.872316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.096 [2024-07-15 12:59:49.872335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.096 qpair failed and we were unable to recover it. 00:29:58.096 [2024-07-15 12:59:49.872513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.096 [2024-07-15 12:59:49.872530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.096 qpair failed and we were unable to recover it. 00:29:58.096 [2024-07-15 12:59:49.872707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.096 [2024-07-15 12:59:49.872725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.096 qpair failed and we were unable to recover it. 00:29:58.096 [2024-07-15 12:59:49.873005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.096 [2024-07-15 12:59:49.873023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.096 qpair failed and we were unable to recover it. 00:29:58.096 [2024-07-15 12:59:49.873302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.096 [2024-07-15 12:59:49.873320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.096 qpair failed and we were unable to recover it. 
00:29:58.096 [2024-07-15 12:59:49.873578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.096 [2024-07-15 12:59:49.873595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.096 qpair failed and we were unable to recover it. 00:29:58.096 [2024-07-15 12:59:49.873755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.096 [2024-07-15 12:59:49.873772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.096 qpair failed and we were unable to recover it. 00:29:58.096 [2024-07-15 12:59:49.874030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.096 [2024-07-15 12:59:49.874047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.096 qpair failed and we were unable to recover it. 00:29:58.096 [2024-07-15 12:59:49.874307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.096 [2024-07-15 12:59:49.874325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.096 qpair failed and we were unable to recover it. 00:29:58.096 [2024-07-15 12:59:49.874591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.874608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 
00:29:58.097 [2024-07-15 12:59:49.874837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.874854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.875101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.875119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.875224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.875242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.875414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.875432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.875663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.875681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 
00:29:58.097 [2024-07-15 12:59:49.875841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.875858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.876057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.876075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.876310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.876329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.876523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.876541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.876731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.876749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 
00:29:58.097 [2024-07-15 12:59:49.877027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.877045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.877314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.877333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.877596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.877614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.877805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.877823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.878085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.878103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 
00:29:58.097 [2024-07-15 12:59:49.878355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.878374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.878634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.878652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.878865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.878883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.879112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.879130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.879378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.879399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 
00:29:58.097 [2024-07-15 12:59:49.879664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.879682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.879842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.879860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.880040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.880057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.880240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.880264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.880524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.880542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 
00:29:58.097 [2024-07-15 12:59:49.880701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.880718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.880884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.880902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.097 [2024-07-15 12:59:49.881132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.097 [2024-07-15 12:59:49.881150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.097 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.881343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.881361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.881638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.881659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 
00:29:58.098 [2024-07-15 12:59:49.881847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.881865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.882049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.882067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.882276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.882295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.882526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.882543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.882793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.882811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 
00:29:58.098 [2024-07-15 12:59:49.883009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.883027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.883277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.883296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.883530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.883548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.883802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.883819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.884075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.884092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 
00:29:58.098 [2024-07-15 12:59:49.884345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.884363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.884592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.884610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.884871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.884889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.885066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.885083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.885242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.885267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 
00:29:58.098 [2024-07-15 12:59:49.885495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.885513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.885781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.885798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.886046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.886064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.886237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.886260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.886573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.886591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 
00:29:58.098 [2024-07-15 12:59:49.886837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.886855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.887121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.887138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.887319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.887338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.887541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.887559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 00:29:58.098 [2024-07-15 12:59:49.887842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.887859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it. 
00:29:58.098 [2024-07-15 12:59:49.888120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.098 [2024-07-15 12:59:49.888138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.098 qpair failed and we were unable to recover it.
00:29:58.101 [... same connect()/qpair error repeated for each reconnect attempt, 2024-07-15 12:59:49.888321 through 12:59:49.914872 ...]
00:29:58.101 [2024-07-15 12:59:49.915122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.101 [2024-07-15 12:59:49.915140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.101 qpair failed and we were unable to recover it. 00:29:58.101 [2024-07-15 12:59:49.915320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.101 [2024-07-15 12:59:49.915339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.101 qpair failed and we were unable to recover it. 00:29:58.101 [2024-07-15 12:59:49.915613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.101 [2024-07-15 12:59:49.915631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.101 qpair failed and we were unable to recover it. 00:29:58.101 [2024-07-15 12:59:49.915913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.101 [2024-07-15 12:59:49.915931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.101 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.916188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.916206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 
00:29:58.102 [2024-07-15 12:59:49.916464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.916482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.916773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.916791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.917048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.917065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.917263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.917282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.917462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.917480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 
00:29:58.102 [2024-07-15 12:59:49.917711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.917730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.917911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.917929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.918092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.918109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.918358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.918376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.918638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.918656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 
00:29:58.102 [2024-07-15 12:59:49.918941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.918958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.919148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.919166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.919423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.919442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.919689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.919707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.919940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.919960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 
00:29:58.102 [2024-07-15 12:59:49.920164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.920181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.920448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.920466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.920595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.920613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.920871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.920889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.921172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.921190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 
00:29:58.102 [2024-07-15 12:59:49.921449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.921468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.921664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.921681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.921912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.921930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.922095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.922112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.922343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.922362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 
00:29:58.102 [2024-07-15 12:59:49.922591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.922609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.922787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.922805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.923154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.923172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.923444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.923464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.923671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.923689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 
00:29:58.102 [2024-07-15 12:59:49.923950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.923968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.924197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.924215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.924465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.924483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.924718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.924735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.925022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.925039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 
00:29:58.102 [2024-07-15 12:59:49.925295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.925314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.925571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.925589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.925759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.925777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.925960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.102 [2024-07-15 12:59:49.925978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.102 qpair failed and we were unable to recover it. 00:29:58.102 [2024-07-15 12:59:49.926236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.926258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 
00:29:58.103 [2024-07-15 12:59:49.926439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.926456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.926713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.926731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.926966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.926984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.927158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.927175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.927384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.927403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 
00:29:58.103 [2024-07-15 12:59:49.927596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.927614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.927725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.927742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.927948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.927966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.928195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.928213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.928505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.928524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 
00:29:58.103 [2024-07-15 12:59:49.928703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.928721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.929006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.929023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.929286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.929305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.929494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.929512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.929686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.929707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 
00:29:58.103 [2024-07-15 12:59:49.929975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.929993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.930173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.930191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.930368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.930387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.930623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.930640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.930823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.930841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 
00:29:58.103 [2024-07-15 12:59:49.931152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.931170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.931278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.931297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.931583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.931601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.931824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.931841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.932053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.932070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 
00:29:58.103 [2024-07-15 12:59:49.932337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.932356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.932505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.932523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.932683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.932700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.932887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.932905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.933074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.933092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 
00:29:58.103 [2024-07-15 12:59:49.933350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.933369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.933594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.933611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.933865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.933883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.934090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.934107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 00:29:58.103 [2024-07-15 12:59:49.934393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.103 [2024-07-15 12:59:49.934411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.103 qpair failed and we were unable to recover it. 
00:29:58.106 [2024-07-15 12:59:49.961007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.106 [2024-07-15 12:59:49.961025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.106 qpair failed and we were unable to recover it. 00:29:58.106 [2024-07-15 12:59:49.961336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.106 [2024-07-15 12:59:49.961355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.106 qpair failed and we were unable to recover it. 00:29:58.106 [2024-07-15 12:59:49.961551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.106 [2024-07-15 12:59:49.961570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.106 qpair failed and we were unable to recover it. 00:29:58.106 [2024-07-15 12:59:49.961808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.106 [2024-07-15 12:59:49.961826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.106 qpair failed and we were unable to recover it. 00:29:58.106 [2024-07-15 12:59:49.962029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.106 [2024-07-15 12:59:49.962046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.106 qpair failed and we were unable to recover it. 
00:29:58.107 [2024-07-15 12:59:49.962224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.962242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.962452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.962470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.962633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.962651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.962831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.962849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.963018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.963036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 
00:29:58.107 [2024-07-15 12:59:49.963302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.963321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.963507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.963525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.963788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.963806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.964060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.964078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.964282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.964300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 
00:29:58.107 [2024-07-15 12:59:49.964476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.964495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.964660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.964678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.964929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.964947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.965204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.965222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.965479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.965497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 
00:29:58.107 [2024-07-15 12:59:49.965748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.965766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.965891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.965909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.966079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.966097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.966281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.966301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.966438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.966456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 
00:29:58.107 [2024-07-15 12:59:49.966687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.966705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.966877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.966895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.967195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.967213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.967318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.967337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.967503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.967524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 
00:29:58.107 [2024-07-15 12:59:49.967774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.967792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.968120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.968138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.968318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.968337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.968532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.968551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.968779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.968798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 
00:29:58.107 [2024-07-15 12:59:49.968916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.968933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.969166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.969183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.969277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.969296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.969526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.969544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.969780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.969798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 
00:29:58.107 [2024-07-15 12:59:49.970070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.970088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.970253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.970278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.970445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.970463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.970660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.970678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.107 [2024-07-15 12:59:49.970935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.970953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 
00:29:58.107 [2024-07-15 12:59:49.971240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.107 [2024-07-15 12:59:49.971264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.107 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.971480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.971499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.971712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.971730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.971907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.971925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.972183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.972201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 
00:29:58.108 [2024-07-15 12:59:49.972396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.972414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.972582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.972600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.972896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.972914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.973174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.973192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.973450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.973469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 
00:29:58.108 [2024-07-15 12:59:49.973651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.973669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.973916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.973937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.974130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.974148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.974393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.974412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.974516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.974534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 
00:29:58.108 [2024-07-15 12:59:49.974706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.974725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.974900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.974918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.975089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.975108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.975269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.975287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.975480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.975498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 
00:29:58.108 [2024-07-15 12:59:49.975726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.975744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.975910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.975928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.976165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.976183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.976395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.976414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.976645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.976663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 
00:29:58.108 [2024-07-15 12:59:49.976892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.976910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.977186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.977204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.977336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.977355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.977541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.977558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.977784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.977802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 
00:29:58.108 [2024-07-15 12:59:49.978088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.978105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.978336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.978355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.978534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.978552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.978812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.978829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.979080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.979098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 
00:29:58.108 [2024-07-15 12:59:49.979263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.979281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.979491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.979509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.979700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.979717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.979849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.979866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.980048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.980065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 
00:29:58.108 [2024-07-15 12:59:49.980245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.108 [2024-07-15 12:59:49.980270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.108 qpair failed and we were unable to recover it. 00:29:58.108 [2024-07-15 12:59:49.980486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.980505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.980693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.980710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.981030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.981048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.981329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.981349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 
00:29:58.109 [2024-07-15 12:59:49.981520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.981537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.981767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.981786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.981990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.982008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.982312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.982331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.982561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.982580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 
00:29:58.109 [2024-07-15 12:59:49.982770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.982789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.983088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.983109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.983317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.983335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.983496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.983513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.983741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.983759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 
00:29:58.109 [2024-07-15 12:59:49.984024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.984042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.984217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.984234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.984477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.984496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.984682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.984700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.984879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.984896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 
00:29:58.109 [2024-07-15 12:59:49.985093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.985111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.985344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.985363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.985617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.985635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.985888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.985905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.986079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.986097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 
00:29:58.109 [2024-07-15 12:59:49.986277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.986296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.986504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.986522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.986708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.986726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.986924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.986942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.987173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.987190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 
00:29:58.109 [2024-07-15 12:59:49.987298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.987317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.987572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.987590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.987822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.987840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.988091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.988108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.988337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.988355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 
00:29:58.109 [2024-07-15 12:59:49.988490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.988508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.988768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.988787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.988913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.988931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.989213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.989230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.989455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.989474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 
00:29:58.109 [2024-07-15 12:59:49.989645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.989663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.989962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.109 [2024-07-15 12:59:49.989980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.109 qpair failed and we were unable to recover it. 00:29:58.109 [2024-07-15 12:59:49.990268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.110 [2024-07-15 12:59:49.990287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.110 qpair failed and we were unable to recover it. 00:29:58.110 [2024-07-15 12:59:49.990524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.110 [2024-07-15 12:59:49.990542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.110 qpair failed and we were unable to recover it. 00:29:58.110 [2024-07-15 12:59:49.990703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.110 [2024-07-15 12:59:49.990721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.110 qpair failed and we were unable to recover it. 
00:29:58.110 [2024-07-15 12:59:49.990962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.110 [2024-07-15 12:59:49.990980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.110 qpair failed and we were unable to recover it. 00:29:58.110 [2024-07-15 12:59:49.991324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.110 [2024-07-15 12:59:49.991344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.110 qpair failed and we were unable to recover it. 00:29:58.110 [2024-07-15 12:59:49.991607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.110 [2024-07-15 12:59:49.991625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.110 qpair failed and we were unable to recover it. 00:29:58.110 [2024-07-15 12:59:49.991737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.110 [2024-07-15 12:59:49.991756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.110 qpair failed and we were unable to recover it. 00:29:58.110 [2024-07-15 12:59:49.992076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.110 [2024-07-15 12:59:49.992094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.110 qpair failed and we were unable to recover it. 
00:29:58.110 [2024-07-15 12:59:49.992325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.110 [2024-07-15 12:59:49.992344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.110 qpair failed and we were unable to recover it. 00:29:58.110 [2024-07-15 12:59:49.992540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.395 [2024-07-15 12:59:49.992561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.395 qpair failed and we were unable to recover it. 00:29:58.395 [2024-07-15 12:59:49.992823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.395 [2024-07-15 12:59:49.992844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.395 qpair failed and we were unable to recover it. 00:29:58.395 [2024-07-15 12:59:49.993131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.395 [2024-07-15 12:59:49.993149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.395 qpair failed and we were unable to recover it. 00:29:58.395 [2024-07-15 12:59:49.993395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.395 [2024-07-15 12:59:49.993413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.395 qpair failed and we were unable to recover it. 
00:29:58.395 [2024-07-15 12:59:49.993665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.395 [2024-07-15 12:59:49.993683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.395 qpair failed and we were unable to recover it. 00:29:58.395 [2024-07-15 12:59:49.993921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.395 [2024-07-15 12:59:49.993939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.395 qpair failed and we were unable to recover it. 00:29:58.395 [2024-07-15 12:59:49.994185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.395 [2024-07-15 12:59:49.994203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.395 qpair failed and we were unable to recover it. 00:29:58.395 [2024-07-15 12:59:49.994382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.395 [2024-07-15 12:59:49.994400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.395 qpair failed and we were unable to recover it. 00:29:58.395 [2024-07-15 12:59:49.994578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.395 [2024-07-15 12:59:49.994597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.395 qpair failed and we were unable to recover it. 
00:29:58.395 [2024-07-15 12:59:49.994779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.994797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 00:29:58.396 [2024-07-15 12:59:49.995059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.995077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 00:29:58.396 [2024-07-15 12:59:49.995250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.995290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 00:29:58.396 [2024-07-15 12:59:49.995469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.995487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 00:29:58.396 [2024-07-15 12:59:49.995717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.995735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 
00:29:58.396 [2024-07-15 12:59:49.995846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.995864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 00:29:58.396 [2024-07-15 12:59:49.996024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.996042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 00:29:58.396 [2024-07-15 12:59:49.996297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.996316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 00:29:58.396 [2024-07-15 12:59:49.996555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.996574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 00:29:58.396 [2024-07-15 12:59:49.996739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.996757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 
00:29:58.396 [2024-07-15 12:59:49.997010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.997028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 00:29:58.396 [2024-07-15 12:59:49.997310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.997331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 00:29:58.396 [2024-07-15 12:59:49.997513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.997531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 00:29:58.396 [2024-07-15 12:59:49.997762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.997780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 00:29:58.396 [2024-07-15 12:59:49.998010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.998028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 
00:29:58.396 [2024-07-15 12:59:49.998295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.998314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 00:29:58.396 [2024-07-15 12:59:49.998529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.998547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 00:29:58.396 [2024-07-15 12:59:49.998730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.998747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 00:29:58.396 [2024-07-15 12:59:49.998932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.998950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 00:29:58.396 [2024-07-15 12:59:49.999059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.999078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 
00:29:58.396 [2024-07-15 12:59:49.999272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.999292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 00:29:58.396 [2024-07-15 12:59:49.999409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.999427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 00:29:58.396 [2024-07-15 12:59:49.999690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.999708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 00:29:58.396 [2024-07-15 12:59:49.999894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:49.999912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 00:29:58.396 [2024-07-15 12:59:50.000158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.396 [2024-07-15 12:59:50.000176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.396 qpair failed and we were unable to recover it. 
00:29:58.396 [2024-07-15 12:59:50.000390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.396 [2024-07-15 12:59:50.000410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.396 qpair failed and we were unable to recover it.
[... same connect() failed / sock connection error / qpair failed sequence for tqpair=0x7f75d8000b90 (addr=10.0.0.2, port=4420) repeats continuously from 12:59:50.000603 through 12:59:50.024005; repeated entries elided ...]
00:29:58.399 [2024-07-15 12:59:50.024261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.399 [2024-07-15 12:59:50.024280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.399 qpair failed and we were unable to recover it. 00:29:58.399 [2024-07-15 12:59:50.024456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.399 [2024-07-15 12:59:50.024474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.399 qpair failed and we were unable to recover it. 00:29:58.399 [2024-07-15 12:59:50.024611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.399 [2024-07-15 12:59:50.024629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.399 qpair failed and we were unable to recover it. 00:29:58.399 [2024-07-15 12:59:50.024757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.399 [2024-07-15 12:59:50.024774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.399 qpair failed and we were unable to recover it. 00:29:58.399 [2024-07-15 12:59:50.024960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.399 [2024-07-15 12:59:50.024978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.399 qpair failed and we were unable to recover it. 
00:29:58.399 [2024-07-15 12:59:50.025087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.399 [2024-07-15 12:59:50.025106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.399 qpair failed and we were unable to recover it. 00:29:58.399 [2024-07-15 12:59:50.025284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.399 [2024-07-15 12:59:50.025303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.399 qpair failed and we were unable to recover it. 00:29:58.399 [2024-07-15 12:59:50.025480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.399 [2024-07-15 12:59:50.025498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.399 qpair failed and we were unable to recover it. 00:29:58.399 [2024-07-15 12:59:50.025675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.399 [2024-07-15 12:59:50.025693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.399 qpair failed and we were unable to recover it. 00:29:58.399 [2024-07-15 12:59:50.025824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.025842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 
00:29:58.400 [2024-07-15 12:59:50.026027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.026045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.026249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.026278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.026402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.026420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.026610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.026628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.026755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.026773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 
00:29:58.400 [2024-07-15 12:59:50.027015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.027033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.027160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.027178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.027309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.027328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.027479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.027498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.027637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.027655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 
00:29:58.400 [2024-07-15 12:59:50.027933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.027951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.028097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.028115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.028264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.028282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.028544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.028562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.028954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.028972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 
00:29:58.400 [2024-07-15 12:59:50.029222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.029240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.029626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.029644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.029780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.029797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.029927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.029944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.030053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.030070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 
00:29:58.400 [2024-07-15 12:59:50.030165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.030183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.030309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.030327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.030443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.030460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.030582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.030600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.030700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.030717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 
00:29:58.400 [2024-07-15 12:59:50.030819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.030837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.030976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.030994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.031128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.031146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.031428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.031448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.400 qpair failed and we were unable to recover it. 00:29:58.400 [2024-07-15 12:59:50.031709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.400 [2024-07-15 12:59:50.031727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 
00:29:58.401 [2024-07-15 12:59:50.031973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.031991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.032174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.032192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.032398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.032416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.032648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.032666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.032873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.032891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 
00:29:58.401 [2024-07-15 12:59:50.033166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.033183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.033311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.033330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.033597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.033615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.033817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.033834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.034019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.034036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 
00:29:58.401 [2024-07-15 12:59:50.034142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.034160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.034424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.034447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.034621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.034639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.034822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.034839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.034962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.034980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 
00:29:58.401 [2024-07-15 12:59:50.035108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.035126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.035230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.035248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.035603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.035622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.035920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.035938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.036039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.036057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 
00:29:58.401 [2024-07-15 12:59:50.036246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.036272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.036525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.036543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.036760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.036778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.037005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.037023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.037222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.037239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 
00:29:58.401 [2024-07-15 12:59:50.037438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.037457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.037559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.037576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.037824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.037842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.038082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.038100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.038298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.038317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 
00:29:58.401 [2024-07-15 12:59:50.038514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.038531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.038858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.038875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.039110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.039128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.401 [2024-07-15 12:59:50.039318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-07-15 12:59:50.039337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.401 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.039542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.039560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 
00:29:58.402 [2024-07-15 12:59:50.039802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.039821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.039940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.039958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.040062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.040080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.040248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.040274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.040503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.040520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 
00:29:58.402 [2024-07-15 12:59:50.040696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.040714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.040905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.040922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.041178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.041196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.041383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.041402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.041593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.041611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 
00:29:58.402 [2024-07-15 12:59:50.041745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.041762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.041874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.041892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.042018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.042037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.042265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.042283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.042500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.042518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 
00:29:58.402 [2024-07-15 12:59:50.042677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.042695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.042932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.042954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.043251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.043278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.043431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.043448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.043649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.043667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 
00:29:58.402 [2024-07-15 12:59:50.043867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.043885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.044091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.044108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.044305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.044324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.044560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.044578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.044688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.044706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 
00:29:58.402 [2024-07-15 12:59:50.044817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.044834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.045035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.045053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.045250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.045275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.045482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.045500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.045695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.045713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 
00:29:58.402 [2024-07-15 12:59:50.045901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.045919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.046031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.046048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.046280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.046298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.046570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.046588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.046847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.046864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 
00:29:58.402 [2024-07-15 12:59:50.047027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.402 [2024-07-15 12:59:50.047046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.402 qpair failed and we were unable to recover it. 00:29:58.402 [2024-07-15 12:59:50.047326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.047344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.047593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.047610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.047896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.047914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.048017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.048035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 
00:29:58.403 [2024-07-15 12:59:50.048200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.048218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.048501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.048520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.048695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.048713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.049017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.049035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.049171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.049189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 
00:29:58.403 [2024-07-15 12:59:50.049466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.049485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.049683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.049701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.049932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.049949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.050183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.050201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.050341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.050359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 
00:29:58.403 [2024-07-15 12:59:50.050603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.050622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.050881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.050899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.051129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.051147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.051411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.051429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.051556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.051574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 
00:29:58.403 [2024-07-15 12:59:50.051830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.051847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.052114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.052135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.052397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.052415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.052611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.052629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.052850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.052868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 
00:29:58.403 [2024-07-15 12:59:50.053053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.053070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.053361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.053380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.053557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.053574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.053866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.053885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.054177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.054195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 
00:29:58.403 [2024-07-15 12:59:50.054440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.054459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.054642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.054660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.054781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.054799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.055058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.055076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 00:29:58.403 [2024-07-15 12:59:50.055372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.055391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.403 qpair failed and we were unable to recover it. 
00:29:58.403 [2024-07-15 12:59:50.055631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.403 [2024-07-15 12:59:50.055649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.055824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.055842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.056088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.056106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.056310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.056329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.056489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.056508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 
00:29:58.404 [2024-07-15 12:59:50.056601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.056620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.056790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.056809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.056986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.057005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.057198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.057216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.057380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.057399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 
00:29:58.404 [2024-07-15 12:59:50.057559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.057578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.057693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.057711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.057928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.057947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.058158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.058176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.058374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.058394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 
00:29:58.404 [2024-07-15 12:59:50.058568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.058586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.058751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.058769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.059053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.059070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.059228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.059246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.059430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.059448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 
00:29:58.404 [2024-07-15 12:59:50.059623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.059641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.059872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.059890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.060011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.060029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.060125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.060142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.060324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.060343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 
00:29:58.404 [2024-07-15 12:59:50.060445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.060463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.060629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.060651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.060882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.060901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.061156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.061174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.061336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.061356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 
00:29:58.404 [2024-07-15 12:59:50.061526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.061543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.061652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.061670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.061859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.061877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.062146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.062164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 00:29:58.404 [2024-07-15 12:59:50.062286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.404 [2024-07-15 12:59:50.062305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.404 qpair failed and we were unable to recover it. 
00:29:58.404 [2024-07-15 12:59:50.062562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.404 [2024-07-15 12:59:50.062580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.404 qpair failed and we were unable to recover it.
00:29:58.404 [2024-07-15 12:59:50.062860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.404 [2024-07-15 12:59:50.062878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.404 qpair failed and we were unable to recover it.
00:29:58.404 [2024-07-15 12:59:50.063139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.063156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.063262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.063281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.063463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.063481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.063685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.063703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.063899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.063918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.064114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.064132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.064241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.064274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.064384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.064402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.064659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.064676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.064953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.064971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.065269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.065288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.065584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.065603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.065717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.065734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.065911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.065929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.066218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.066236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.066433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.066452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.066718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.066735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.066958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.066976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.067228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.067246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.067404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.067422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.067601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.067619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.067745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.067763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.068021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.068039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.068265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.068284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.068484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.068502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.068773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.068791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.069055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.069072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.069332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.069352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.069590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.405 [2024-07-15 12:59:50.069608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.405 qpair failed and we were unable to recover it.
00:29:58.405 [2024-07-15 12:59:50.069785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.069807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.070091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.070108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.070287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.070305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.070482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.070500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.070762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.070781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.071069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.071088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.071291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.071310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.071524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.071542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.071796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.071814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.071924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.071941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.072105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.072123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.072323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.072342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.072515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.072533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.072786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.072804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.073003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.073021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.073214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.073231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.073440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.073459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.073715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.073734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.073955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.073972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.074103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.074121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.074363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.074382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.074583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.074601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.074807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.074825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.075085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.075103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.075276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.075294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.075488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.075506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.075685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.075703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.075936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.075955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.076144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.076161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.076289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.076308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.076564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.076583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.076748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.076766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.077026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.077044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.077332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.077350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.077580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.077598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.077838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.077856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.078121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.078139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.078434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.078453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.078715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.078733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.079042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.079060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.406 [2024-07-15 12:59:50.079352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.406 [2024-07-15 12:59:50.079374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.406 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.079557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.079575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.079836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.079854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.080164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.080182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.080416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.080434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.080693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.080711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.080871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.080888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.081091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.081109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.081395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.081414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.081594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.081612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.081896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.081914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.082141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.082159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.082395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.082415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.082587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.082605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.082720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.082738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.082939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.082957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.083130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.083148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.083393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.083412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.083644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.083661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.083843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.083861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.084126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.084144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.084340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.084359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.084570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.084588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.084840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.407 [2024-07-15 12:59:50.084859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.407 qpair failed and we were unable to recover it.
00:29:58.407 [2024-07-15 12:59:50.085064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.407 [2024-07-15 12:59:50.085082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.407 qpair failed and we were unable to recover it. 00:29:58.407 [2024-07-15 12:59:50.085367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.407 [2024-07-15 12:59:50.085386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.407 qpair failed and we were unable to recover it. 00:29:58.407 [2024-07-15 12:59:50.085505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.407 [2024-07-15 12:59:50.085523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.407 qpair failed and we were unable to recover it. 00:29:58.407 [2024-07-15 12:59:50.085731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.407 [2024-07-15 12:59:50.085752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.407 qpair failed and we were unable to recover it. 00:29:58.407 [2024-07-15 12:59:50.085934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.407 [2024-07-15 12:59:50.085952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.407 qpair failed and we were unable to recover it. 
00:29:58.407 [2024-07-15 12:59:50.086079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.407 [2024-07-15 12:59:50.086097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.407 qpair failed and we were unable to recover it. 00:29:58.407 [2024-07-15 12:59:50.086197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.407 [2024-07-15 12:59:50.086215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.407 qpair failed and we were unable to recover it. 00:29:58.407 [2024-07-15 12:59:50.086499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.407 [2024-07-15 12:59:50.086518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.407 qpair failed and we were unable to recover it. 00:29:58.407 [2024-07-15 12:59:50.086706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.407 [2024-07-15 12:59:50.086724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.407 qpair failed and we were unable to recover it. 00:29:58.407 [2024-07-15 12:59:50.087032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.407 [2024-07-15 12:59:50.087050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.407 qpair failed and we were unable to recover it. 
00:29:58.407 [2024-07-15 12:59:50.087229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.407 [2024-07-15 12:59:50.087247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.407 qpair failed and we were unable to recover it. 00:29:58.407 [2024-07-15 12:59:50.087549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.407 [2024-07-15 12:59:50.087568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.407 qpair failed and we were unable to recover it. 00:29:58.407 [2024-07-15 12:59:50.087683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.407 [2024-07-15 12:59:50.087700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.407 qpair failed and we were unable to recover it. 00:29:58.407 [2024-07-15 12:59:50.087864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.407 [2024-07-15 12:59:50.087882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.407 qpair failed and we were unable to recover it. 00:29:58.407 [2024-07-15 12:59:50.088153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.407 [2024-07-15 12:59:50.088171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.407 qpair failed and we were unable to recover it. 
00:29:58.407 [2024-07-15 12:59:50.088333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.407 [2024-07-15 12:59:50.088352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.407 qpair failed and we were unable to recover it. 00:29:58.407 [2024-07-15 12:59:50.088540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.407 [2024-07-15 12:59:50.088559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.407 qpair failed and we were unable to recover it. 00:29:58.407 [2024-07-15 12:59:50.088735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.407 [2024-07-15 12:59:50.088753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.407 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.088876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.088893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.089138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.089155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 
00:29:58.408 [2024-07-15 12:59:50.089395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.089414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.089693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.089712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.089882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.089900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.090185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.090203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.090382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.090401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 
00:29:58.408 [2024-07-15 12:59:50.090592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.090609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.090788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.090806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.091091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.091109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.091296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.091315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.091476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.091493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 
00:29:58.408 [2024-07-15 12:59:50.091741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.091759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.092007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.092024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.092275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.092294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.092478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.092496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.092730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.092748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 
00:29:58.408 [2024-07-15 12:59:50.092949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.092967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.093215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.093233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.093381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.093400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.093562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.093580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.093755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.093773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 
00:29:58.408 [2024-07-15 12:59:50.093998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.094016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.094280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.094300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.094559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.094576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.094721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.094742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.095071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.095088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 
00:29:58.408 [2024-07-15 12:59:50.095316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.095335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.095455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.095473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.095759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.095778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.095951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.095968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.096221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.096239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 
00:29:58.408 [2024-07-15 12:59:50.096583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.096658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75c8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.096913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.096951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75c8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.097102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.097138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75c8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.097364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.097402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75c8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.097549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.097569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 
00:29:58.408 [2024-07-15 12:59:50.097674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.097692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.097892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.097909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.098090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.098108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.408 [2024-07-15 12:59:50.098339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.408 [2024-07-15 12:59:50.098358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.408 qpair failed and we were unable to recover it. 00:29:58.409 [2024-07-15 12:59:50.098532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.098550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 
00:29:58.409 [2024-07-15 12:59:50.098772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.098790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 00:29:58.409 [2024-07-15 12:59:50.098898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.098916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 00:29:58.409 [2024-07-15 12:59:50.099075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.099092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 00:29:58.409 [2024-07-15 12:59:50.099284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.099302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 00:29:58.409 [2024-07-15 12:59:50.099507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.099525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 
00:29:58.409 [2024-07-15 12:59:50.099822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.099840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 00:29:58.409 [2024-07-15 12:59:50.100029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.100047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 00:29:58.409 [2024-07-15 12:59:50.100229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.100246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 00:29:58.409 [2024-07-15 12:59:50.100444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.100462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 00:29:58.409 [2024-07-15 12:59:50.100693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.100711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 
00:29:58.409 [2024-07-15 12:59:50.100885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.100903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 00:29:58.409 [2024-07-15 12:59:50.101150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.101168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 00:29:58.409 [2024-07-15 12:59:50.101276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.101295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 00:29:58.409 [2024-07-15 12:59:50.101463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.101481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 00:29:58.409 [2024-07-15 12:59:50.101644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.101661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 
00:29:58.409 [2024-07-15 12:59:50.101855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.101873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 00:29:58.409 [2024-07-15 12:59:50.102096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.102113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 00:29:58.409 [2024-07-15 12:59:50.102276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.102295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 00:29:58.409 [2024-07-15 12:59:50.102433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.102451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 00:29:58.409 [2024-07-15 12:59:50.102559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.102578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 
00:29:58.409 [2024-07-15 12:59:50.102838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.102855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 00:29:58.409 [2024-07-15 12:59:50.102962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.102979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 00:29:58.409 [2024-07-15 12:59:50.103089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.103107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 00:29:58.409 [2024-07-15 12:59:50.103310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.103332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 00:29:58.409 [2024-07-15 12:59:50.103592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.103610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it. 
00:29:58.409 [2024-07-15 12:59:50.103706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.409 [2024-07-15 12:59:50.103724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.409 qpair failed and we were unable to recover it.
00:29:58.412 [2024-07-15 12:59:50.124356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.412 [2024-07-15 12:59:50.124375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.412 qpair failed and we were unable to recover it. 00:29:58.412 [2024-07-15 12:59:50.124546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.412 [2024-07-15 12:59:50.124564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.412 qpair failed and we were unable to recover it. 00:29:58.412 [2024-07-15 12:59:50.124796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.412 [2024-07-15 12:59:50.124814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.412 qpair failed and we were unable to recover it. 00:29:58.412 [2024-07-15 12:59:50.125005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.412 [2024-07-15 12:59:50.125026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.412 qpair failed and we were unable to recover it. 00:29:58.412 [2024-07-15 12:59:50.125266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.412 [2024-07-15 12:59:50.125285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.412 qpair failed and we were unable to recover it. 
00:29:58.412 [2024-07-15 12:59:50.125551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.412 [2024-07-15 12:59:50.125569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.412 qpair failed and we were unable to recover it. 00:29:58.412 [2024-07-15 12:59:50.125685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.412 [2024-07-15 12:59:50.125702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.412 qpair failed and we were unable to recover it. 00:29:58.412 [2024-07-15 12:59:50.125815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.412 [2024-07-15 12:59:50.125833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.412 qpair failed and we were unable to recover it. 00:29:58.412 [2024-07-15 12:59:50.125935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.412 [2024-07-15 12:59:50.125953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.412 qpair failed and we were unable to recover it. 00:29:58.412 [2024-07-15 12:59:50.126079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.412 [2024-07-15 12:59:50.126097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.412 qpair failed and we were unable to recover it. 
00:29:58.412 [2024-07-15 12:59:50.126302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.412 [2024-07-15 12:59:50.126321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.412 qpair failed and we were unable to recover it. 00:29:58.412 [2024-07-15 12:59:50.126489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.412 [2024-07-15 12:59:50.126507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.412 qpair failed and we were unable to recover it. 00:29:58.412 [2024-07-15 12:59:50.126650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.412 [2024-07-15 12:59:50.126668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.412 qpair failed and we were unable to recover it. 00:29:58.412 [2024-07-15 12:59:50.126885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.412 [2024-07-15 12:59:50.126903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.412 qpair failed and we were unable to recover it. 00:29:58.412 [2024-07-15 12:59:50.127001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.412 [2024-07-15 12:59:50.127019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.412 qpair failed and we were unable to recover it. 
00:29:58.412 [2024-07-15 12:59:50.127147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.127165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.127285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.127304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.127496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.127514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.127609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.127627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.127744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.127762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 
00:29:58.413 [2024-07-15 12:59:50.127928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.127946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.128042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.128059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.128176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.128194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.128377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.128396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.128562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.128580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 
00:29:58.413 [2024-07-15 12:59:50.128746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.128764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.128885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.128902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.129076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.129094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.129322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.129340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.129543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.129561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 
00:29:58.413 [2024-07-15 12:59:50.129673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.129690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.129897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.129915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.130009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.130028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.130210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.130228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.130335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.130353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 
00:29:58.413 [2024-07-15 12:59:50.130562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.130579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.130693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.130710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.130890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.130908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.131072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.131090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.131184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.131201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 
00:29:58.413 [2024-07-15 12:59:50.131380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.131398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.131642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.131660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.131848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.131866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.132051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.132072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.132338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.132357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 
00:29:58.413 [2024-07-15 12:59:50.132521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.132540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.132707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.132725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.132829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.132847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.132957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.132974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.133072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.133090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 
00:29:58.413 [2024-07-15 12:59:50.133348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.413 [2024-07-15 12:59:50.133366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-15 12:59:50.133459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.133477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.133655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.133673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.133788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.133806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.133970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.133988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 
00:29:58.414 [2024-07-15 12:59:50.134146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.134164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.134335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.134354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.134524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.134542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.134637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.134654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.134748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.134766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 
00:29:58.414 [2024-07-15 12:59:50.134882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.134901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.134995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.135013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.135127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.135145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.135305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.135324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.135430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.135448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 
00:29:58.414 [2024-07-15 12:59:50.135559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.135577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.135827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.135845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.135946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.135964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.136127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.136145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.136307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.136326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 
00:29:58.414 [2024-07-15 12:59:50.136507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.136526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.136659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.136677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.136785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.136803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.136962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.136980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.137097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.137115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 
00:29:58.414 [2024-07-15 12:59:50.137299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.137317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.137410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.137428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.137531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.137548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.137639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.137656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.137853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.137871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 
00:29:58.414 [2024-07-15 12:59:50.138049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.138066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.138155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.138173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.138348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.138367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.138481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.138502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.138612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.138630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 
00:29:58.414 [2024-07-15 12:59:50.138737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.138755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.138931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.138949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.139198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.139216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.139315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.139333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-15 12:59:50.139442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.139460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.414 qpair failed and we were unable to recover it. 
00:29:58.414 [2024-07-15 12:59:50.139635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.414 [2024-07-15 12:59:50.139653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.139766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.139783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.140012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.140030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.140129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.140147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.140313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.140332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 
00:29:58.415 [2024-07-15 12:59:50.140495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.140513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.140745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.140762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.140945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.140963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.141127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.141146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.141323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.141341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 
00:29:58.415 [2024-07-15 12:59:50.141513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.141531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.141761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.141779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.141895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.141912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.142019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.142037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.142263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.142283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 
00:29:58.415 [2024-07-15 12:59:50.142394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.142412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.142593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.142610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.142774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.142792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.142899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.142917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.143113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.143131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 
00:29:58.415 [2024-07-15 12:59:50.143231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.143248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.143464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.143482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.143586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.143605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.143835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.143853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.143958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.143976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 
00:29:58.415 [2024-07-15 12:59:50.144187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.144206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.144311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.144330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.144421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.144439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.144555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.144573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.144697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.144715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 
00:29:58.415 [2024-07-15 12:59:50.144893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.144911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.145142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.145160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.145360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.145379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.145470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.145491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.145591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.145609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 
00:29:58.415 [2024-07-15 12:59:50.145814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.145832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.146002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.146020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.146177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.146194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.146312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.146331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.146437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.146455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 
00:29:58.415 [2024-07-15 12:59:50.146552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.146570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.415 qpair failed and we were unable to recover it. 00:29:58.415 [2024-07-15 12:59:50.146799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.415 [2024-07-15 12:59:50.146816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.146982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.147001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.147118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.147136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.147245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.147269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 
00:29:58.416 [2024-07-15 12:59:50.147376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.147395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.147484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.147502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.147609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.147627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.147732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.147750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.147945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.147963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 
00:29:58.416 [2024-07-15 12:59:50.148051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.148070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.148228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.148246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.148501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.148520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.148677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.148694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.148896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.148914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 
00:29:58.416 [2024-07-15 12:59:50.149078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.149095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.149283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.149302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.149393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.149412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.149491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.149509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.149623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.149640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 
00:29:58.416 [2024-07-15 12:59:50.149871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.149890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.150147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.150165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.150332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.150351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.150523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.150541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.150770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.150788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 
00:29:58.416 [2024-07-15 12:59:50.150893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.150911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.151032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.151049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.151146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.151164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.151264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.151282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.151389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.151406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 
00:29:58.416 [2024-07-15 12:59:50.151584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.151603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.151723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.151741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.151851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.151869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.151967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.151988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.152191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.152209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 
00:29:58.416 [2024-07-15 12:59:50.152318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.152337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.152437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.152455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.152574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.152591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.152705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.152722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.152921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.152939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 
00:29:58.416 [2024-07-15 12:59:50.153176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.153195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.153294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.153313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.416 qpair failed and we were unable to recover it. 00:29:58.416 [2024-07-15 12:59:50.153423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.416 [2024-07-15 12:59:50.153441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.417 qpair failed and we were unable to recover it. 00:29:58.417 [2024-07-15 12:59:50.153605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.417 [2024-07-15 12:59:50.153623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.417 qpair failed and we were unable to recover it. 00:29:58.417 [2024-07-15 12:59:50.153797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.417 [2024-07-15 12:59:50.153815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.417 qpair failed and we were unable to recover it. 
00:29:58.420 [2024-07-15 12:59:50.172672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.172690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.172811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.172829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.173008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.173026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.173185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.173202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.173320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.173338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 
00:29:58.420 [2024-07-15 12:59:50.173446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.173465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.173580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.173598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.173700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.173717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.173949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.173967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.174137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.174155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 
00:29:58.420 [2024-07-15 12:59:50.174261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.174280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.174445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.174462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.174604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.174623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.174729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.174746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.174852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.174869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 
00:29:58.420 [2024-07-15 12:59:50.174986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.175004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.175112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.175129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.175229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.175247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.175437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.175456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.175560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.175578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 
00:29:58.420 [2024-07-15 12:59:50.175669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.175686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.175784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.175802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.175903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.175921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.176151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.176169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.176343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.176373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 
00:29:58.420 [2024-07-15 12:59:50.176474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.176494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.176750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.176768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.176870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.176887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.177067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.177084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.177175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.177193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 
00:29:58.420 [2024-07-15 12:59:50.177295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.177314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.177419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.177437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.177597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.177615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.177703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.177721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 00:29:58.420 [2024-07-15 12:59:50.177872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.177889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.420 qpair failed and we were unable to recover it. 
00:29:58.420 [2024-07-15 12:59:50.178070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-15 12:59:50.178088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.178181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.178198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.178314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.178333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.178434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.178452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.178563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.178581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 
00:29:58.421 [2024-07-15 12:59:50.178701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.178718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.178948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.178966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.179129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.179146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.179401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.179419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.179541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.179559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 
00:29:58.421 [2024-07-15 12:59:50.179677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.179695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.179787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.179805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.179972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.179990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.180246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.180271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.180382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.180399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 
00:29:58.421 [2024-07-15 12:59:50.180570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.180588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.180694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.180712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.180891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.180909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.181019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.181036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.181126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.181144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 
00:29:58.421 [2024-07-15 12:59:50.181289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.181308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.181556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.181575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.181739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.181757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.181986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.182004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.182097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.182114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 
00:29:58.421 [2024-07-15 12:59:50.182331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.182349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.182511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.182528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.182629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.182647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.182833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.182851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.183016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.183034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 
00:29:58.421 [2024-07-15 12:59:50.183280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.183302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.183425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.183442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.183615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.183634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.183744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.183763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.183928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.183946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 
00:29:58.421 [2024-07-15 12:59:50.184107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.184124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.184284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.184303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.184407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.184426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.184589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.184607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.184778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.184796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 
00:29:58.421 [2024-07-15 12:59:50.184896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.184913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.421 qpair failed and we were unable to recover it. 00:29:58.421 [2024-07-15 12:59:50.185027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-15 12:59:50.185046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.422 qpair failed and we were unable to recover it. 00:29:58.422 [2024-07-15 12:59:50.185145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.422 [2024-07-15 12:59:50.185163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.422 qpair failed and we were unable to recover it. 00:29:58.422 [2024-07-15 12:59:50.185335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.422 [2024-07-15 12:59:50.185354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.422 qpair failed and we were unable to recover it. 00:29:58.422 [2024-07-15 12:59:50.185465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.422 [2024-07-15 12:59:50.185484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.422 qpair failed and we were unable to recover it. 
00:29:58.425 [2024-07-15 12:59:50.203400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.203419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.203586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.203604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.203724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.203742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.203855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.203873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.203974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.203992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 
00:29:58.425 [2024-07-15 12:59:50.204162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.204180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.204342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.204361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.204466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.204485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.204592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.204610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.204720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.204738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 
00:29:58.425 [2024-07-15 12:59:50.204832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.204850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.205095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.205114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.205303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.205321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.205428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.205446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.205565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.205583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 
00:29:58.425 [2024-07-15 12:59:50.205849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.205868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.205978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.205996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.206172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.206190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.206404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.206423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.206621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.206638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 
00:29:58.425 [2024-07-15 12:59:50.206739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.206757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.206937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.206955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.207064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.207081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.207241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.207267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.207414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.207432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 
00:29:58.425 [2024-07-15 12:59:50.207617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.207635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.207850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.207869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.207973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.425 [2024-07-15 12:59:50.207991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.425 qpair failed and we were unable to recover it. 00:29:58.425 [2024-07-15 12:59:50.208155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.208173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.208339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.208358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 
00:29:58.426 [2024-07-15 12:59:50.208479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.208496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.208591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.208608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.208733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.208752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.208872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.208889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.209068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.209085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 
00:29:58.426 [2024-07-15 12:59:50.209199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.209220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.209412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.209431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.209682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.209699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.209810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.209827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.210042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.210060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 
00:29:58.426 [2024-07-15 12:59:50.210329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.210348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.210521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.210539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.210622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.210641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.210815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.210832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.210994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.211012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 
00:29:58.426 [2024-07-15 12:59:50.211168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.211186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.211348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.211367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.211566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.211584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.211700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.211718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.211960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.211978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 
00:29:58.426 [2024-07-15 12:59:50.212202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.212219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.212457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.212476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.212579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.212598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.212692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.212710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.212808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.212826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 
00:29:58.426 [2024-07-15 12:59:50.213057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.213074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.213261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.213279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.213401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.213418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.213529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.213547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.213728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.213746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 
00:29:58.426 [2024-07-15 12:59:50.213898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.213916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.214102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.214119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.214222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.214240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.214462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.214481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.214713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.214730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 
00:29:58.426 [2024-07-15 12:59:50.214853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.214870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.426 [2024-07-15 12:59:50.214987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.426 [2024-07-15 12:59:50.215005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.426 qpair failed and we were unable to recover it. 00:29:58.427 [2024-07-15 12:59:50.215109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.427 [2024-07-15 12:59:50.215126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.427 qpair failed and we were unable to recover it. 00:29:58.427 [2024-07-15 12:59:50.215298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.427 [2024-07-15 12:59:50.215317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.427 qpair failed and we were unable to recover it. 00:29:58.427 [2024-07-15 12:59:50.215599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.427 [2024-07-15 12:59:50.215617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.427 qpair failed and we were unable to recover it. 
00:29:58.427 [2024-07-15 12:59:50.215777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.427 [2024-07-15 12:59:50.215795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.427 qpair failed and we were unable to recover it. 00:29:58.427 [2024-07-15 12:59:50.216086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.427 [2024-07-15 12:59:50.216104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.427 qpair failed and we were unable to recover it. 00:29:58.427 [2024-07-15 12:59:50.216195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.427 [2024-07-15 12:59:50.216213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.427 qpair failed and we were unable to recover it. 00:29:58.427 [2024-07-15 12:59:50.216386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.427 [2024-07-15 12:59:50.216403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.427 qpair failed and we were unable to recover it. 00:29:58.427 [2024-07-15 12:59:50.216500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.427 [2024-07-15 12:59:50.216518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.427 qpair failed and we were unable to recover it. 
00:29:58.427 [2024-07-15 12:59:50.216639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.427 [2024-07-15 12:59:50.216660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.427 qpair failed and we were unable to recover it.
00:29:58.430 [message group above repeated for each subsequent connection retry through 2024-07-15 12:59:50.236345]
00:29:58.430 [2024-07-15 12:59:50.236518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.236535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 00:29:58.430 [2024-07-15 12:59:50.236646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.236663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 00:29:58.430 [2024-07-15 12:59:50.236760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.236778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 00:29:58.430 [2024-07-15 12:59:50.236947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.236965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 00:29:58.430 [2024-07-15 12:59:50.237067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.237085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 
00:29:58.430 [2024-07-15 12:59:50.237268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.237290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 00:29:58.430 [2024-07-15 12:59:50.237402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.237420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 00:29:58.430 [2024-07-15 12:59:50.237580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.237597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 00:29:58.430 [2024-07-15 12:59:50.237692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.237709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 00:29:58.430 [2024-07-15 12:59:50.237888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.237907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 
00:29:58.430 [2024-07-15 12:59:50.238004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.238021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 00:29:58.430 [2024-07-15 12:59:50.238138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.238156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 00:29:58.430 [2024-07-15 12:59:50.238331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.238349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 00:29:58.430 [2024-07-15 12:59:50.238515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.238532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 00:29:58.430 [2024-07-15 12:59:50.238654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.238673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 
00:29:58.430 [2024-07-15 12:59:50.238772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.238789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 00:29:58.430 [2024-07-15 12:59:50.238892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.238910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 00:29:58.430 [2024-07-15 12:59:50.239017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.239034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 00:29:58.430 [2024-07-15 12:59:50.239199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.239217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 00:29:58.430 [2024-07-15 12:59:50.239393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.239412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 
00:29:58.430 [2024-07-15 12:59:50.239570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.239588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 00:29:58.430 [2024-07-15 12:59:50.239710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.239729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 00:29:58.430 [2024-07-15 12:59:50.239843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.239860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 00:29:58.430 [2024-07-15 12:59:50.240114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.430 [2024-07-15 12:59:50.240132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.430 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.240295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.240314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 
00:29:58.431 [2024-07-15 12:59:50.240401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.240419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.240520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.240538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.240631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.240649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.240831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.240849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.241014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.241032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 
00:29:58.431 [2024-07-15 12:59:50.241190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.241208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.241391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.241409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.241541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.241560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.241744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.241762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.241873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.241891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 
00:29:58.431 [2024-07-15 12:59:50.242061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.242080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.242271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.242290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.242383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.242402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.242582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.242601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.242711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.242728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 
00:29:58.431 [2024-07-15 12:59:50.242958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.242975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.243078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.243096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.243285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.243303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.243468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.243486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.243586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.243605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 
00:29:58.431 [2024-07-15 12:59:50.243775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.243797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.243880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.243898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.244024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.244042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.244155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.244173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.244396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.244415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 
00:29:58.431 [2024-07-15 12:59:50.244596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.244615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.244813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.244831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.245025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.245043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.245178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.245196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.245357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.245376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 
00:29:58.431 [2024-07-15 12:59:50.245487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.245506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.245667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.245685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.245939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.245957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.246142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.246161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.431 [2024-07-15 12:59:50.246272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.246291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 
00:29:58.431 [2024-07-15 12:59:50.246496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.431 [2024-07-15 12:59:50.246514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.431 qpair failed and we were unable to recover it. 00:29:58.432 [2024-07-15 12:59:50.246695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.432 [2024-07-15 12:59:50.246713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.432 qpair failed and we were unable to recover it. 00:29:58.432 [2024-07-15 12:59:50.246876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.432 [2024-07-15 12:59:50.246894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.432 qpair failed and we were unable to recover it. 00:29:58.432 [2024-07-15 12:59:50.247127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.432 [2024-07-15 12:59:50.247145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.432 qpair failed and we were unable to recover it. 00:29:58.432 [2024-07-15 12:59:50.247243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.432 [2024-07-15 12:59:50.247268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.432 qpair failed and we were unable to recover it. 
00:29:58.432 [2024-07-15 12:59:50.247447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.432 [2024-07-15 12:59:50.247465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.432 qpair failed and we were unable to recover it. 00:29:58.432 [2024-07-15 12:59:50.247582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.432 [2024-07-15 12:59:50.247600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.432 qpair failed and we were unable to recover it. 00:29:58.432 [2024-07-15 12:59:50.247704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.432 [2024-07-15 12:59:50.247723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.432 qpair failed and we were unable to recover it. 00:29:58.432 [2024-07-15 12:59:50.247957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.432 [2024-07-15 12:59:50.247975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.432 qpair failed and we were unable to recover it. 00:29:58.432 [2024-07-15 12:59:50.248092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.432 [2024-07-15 12:59:50.248109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.432 qpair failed and we were unable to recover it. 
00:29:58.432 [2024-07-15 12:59:50.248290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.432 [2024-07-15 12:59:50.248309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.432 qpair failed and we were unable to recover it. 00:29:58.432 [2024-07-15 12:59:50.248415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.432 [2024-07-15 12:59:50.248433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.432 qpair failed and we were unable to recover it. 00:29:58.432 [2024-07-15 12:59:50.248543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.432 [2024-07-15 12:59:50.248561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.432 qpair failed and we were unable to recover it. 00:29:58.432 [2024-07-15 12:59:50.248737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.432 [2024-07-15 12:59:50.248755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.432 qpair failed and we were unable to recover it. 00:29:58.432 [2024-07-15 12:59:50.248869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.432 [2024-07-15 12:59:50.248888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.432 qpair failed and we were unable to recover it. 
00:29:58.432 [2024-07-15 12:59:50.248981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.432 [2024-07-15 12:59:50.248999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.432 qpair failed and we were unable to recover it.
00:29:58.435 [... the three messages above repeat, with new timestamps only, through 12:59:50.268545: every reconnect attempt to 10.0.0.2 port 4420 on tqpair=0x7f75d8000b90 fails with errno = 111 and the qpair is not recovered ...]
00:29:58.435 [2024-07-15 12:59:50.268646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.268664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.268792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.268810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.269002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.269023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.269116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.269133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.269267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.269287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 
00:29:58.435 [2024-07-15 12:59:50.269473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.269491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.269583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.269601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.269829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.269847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.269962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.269980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.270150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.270167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 
00:29:58.435 [2024-07-15 12:59:50.270342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.270362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.270478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.270495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.270598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.270616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.270719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.270738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.270822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.270839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 
00:29:58.435 [2024-07-15 12:59:50.271025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.271044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.271150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.271168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.271287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.271307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.271554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.271573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.271738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.271756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 
00:29:58.435 [2024-07-15 12:59:50.271875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.271892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.272055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.272073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.272184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.272201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.272374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.272393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.272483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.272500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 
00:29:58.435 [2024-07-15 12:59:50.272770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.272789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.272905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.272923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.273154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.273172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.273274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.273293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-07-15 12:59:50.273484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.435 [2024-07-15 12:59:50.273502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.435 qpair failed and we were unable to recover it. 
00:29:58.435 [2024-07-15 12:59:50.273676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.273694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.273800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.273817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.273934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.273952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.274164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.274183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.274354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.274373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 
00:29:58.436 [2024-07-15 12:59:50.274480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.274498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.274747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.274765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.274945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.274964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.275163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.275182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.275293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.275311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 
00:29:58.436 [2024-07-15 12:59:50.275498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.275517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.275686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.275704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.275905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.275926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.276024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.276042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.276223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.276241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 
00:29:58.436 [2024-07-15 12:59:50.276429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.276449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.276572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.276589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.276693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.276713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.276945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.276964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.277054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.277072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 
00:29:58.436 [2024-07-15 12:59:50.277180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.277197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.277306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.277324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.277436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.277454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.277592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.277610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.277841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.277859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 
00:29:58.436 [2024-07-15 12:59:50.278145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.278163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.278400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.278420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.278588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.278607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.278767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.278785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.278881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.278899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 
00:29:58.436 [2024-07-15 12:59:50.279078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.279096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.279248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.279283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.279375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.279393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.279490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.279508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.279655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.279673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 
00:29:58.436 [2024-07-15 12:59:50.279768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.279785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.279950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.279968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-07-15 12:59:50.280226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.436 [2024-07-15 12:59:50.280245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.437 [2024-07-15 12:59:50.280532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.437 [2024-07-15 12:59:50.280550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.437 qpair failed and we were unable to recover it. 00:29:58.437 [2024-07-15 12:59:50.280648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.437 [2024-07-15 12:59:50.280666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.437 qpair failed and we were unable to recover it. 
00:29:58.437 [2024-07-15 12:59:50.280770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.437 [2024-07-15 12:59:50.280787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.437 qpair failed and we were unable to recover it. 00:29:58.437 [2024-07-15 12:59:50.280957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.437 [2024-07-15 12:59:50.280975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.437 qpair failed and we were unable to recover it. 00:29:58.437 [2024-07-15 12:59:50.281091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.437 [2024-07-15 12:59:50.281109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.437 qpair failed and we were unable to recover it. 00:29:58.437 [2024-07-15 12:59:50.281273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.437 [2024-07-15 12:59:50.281291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.437 qpair failed and we were unable to recover it. 00:29:58.437 [2024-07-15 12:59:50.281405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.437 [2024-07-15 12:59:50.281423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.437 qpair failed and we were unable to recover it. 
00:29:58.437 [2024-07-15 12:59:50.281652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.437 [2024-07-15 12:59:50.281671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.437 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats continuously from 12:59:50.281907 through 12:59:50.301859 ...]
00:29:58.439 [2024-07-15 12:59:50.302054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.439 [2024-07-15 12:59:50.302073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.439 qpair failed and we were unable to recover it. 00:29:58.439 [2024-07-15 12:59:50.302164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.439 [2024-07-15 12:59:50.302182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.439 qpair failed and we were unable to recover it. 00:29:58.439 [2024-07-15 12:59:50.302345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.439 [2024-07-15 12:59:50.302364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.439 qpair failed and we were unable to recover it. 00:29:58.439 [2024-07-15 12:59:50.302483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.439 [2024-07-15 12:59:50.302502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.439 qpair failed and we were unable to recover it. 00:29:58.439 [2024-07-15 12:59:50.302669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.439 [2024-07-15 12:59:50.302687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.439 qpair failed and we were unable to recover it. 
00:29:58.439 [2024-07-15 12:59:50.302780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.439 [2024-07-15 12:59:50.302798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.439 qpair failed and we were unable to recover it. 00:29:58.439 [2024-07-15 12:59:50.303002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.439 [2024-07-15 12:59:50.303020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.439 qpair failed and we were unable to recover it. 00:29:58.439 [2024-07-15 12:59:50.303121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.439 [2024-07-15 12:59:50.303138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.439 qpair failed and we were unable to recover it. 00:29:58.439 [2024-07-15 12:59:50.303321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.439 [2024-07-15 12:59:50.303340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.439 qpair failed and we were unable to recover it. 00:29:58.439 [2024-07-15 12:59:50.303504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.439 [2024-07-15 12:59:50.303523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.439 qpair failed and we were unable to recover it. 
00:29:58.439 [2024-07-15 12:59:50.303618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.439 [2024-07-15 12:59:50.303636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.439 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.303733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.303752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.303855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.303875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.303996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.304014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.304116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.304134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 
00:29:58.440 [2024-07-15 12:59:50.304270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.304290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.304520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.304538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.304655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.304673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.304793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.304812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.304924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.304942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 
00:29:58.440 [2024-07-15 12:59:50.305055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.305074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.305300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.305319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.305426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.305443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.305624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.305643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.305749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.305766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 
00:29:58.440 [2024-07-15 12:59:50.305860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.305878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.305983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.306001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.306120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.306138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.306268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.306287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.306376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.306394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 
00:29:58.440 [2024-07-15 12:59:50.306501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.306518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.306625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.306644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.306751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.306768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.306889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.306908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.307085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.307104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 
00:29:58.440 [2024-07-15 12:59:50.307227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.307244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.307359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.307378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.307471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.307488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.440 [2024-07-15 12:59:50.307600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.440 [2024-07-15 12:59:50.307619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.440 qpair failed and we were unable to recover it. 00:29:58.441 [2024-07-15 12:59:50.307726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.441 [2024-07-15 12:59:50.307744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.441 qpair failed and we were unable to recover it. 
00:29:58.441 [2024-07-15 12:59:50.307977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.441 [2024-07-15 12:59:50.307996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.441 qpair failed and we were unable to recover it. 00:29:58.441 [2024-07-15 12:59:50.308158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.441 [2024-07-15 12:59:50.308176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.441 qpair failed and we were unable to recover it. 00:29:58.441 [2024-07-15 12:59:50.308283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.441 [2024-07-15 12:59:50.308302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.441 qpair failed and we were unable to recover it. 00:29:58.441 [2024-07-15 12:59:50.308398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.441 [2024-07-15 12:59:50.308417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.441 qpair failed and we were unable to recover it. 00:29:58.441 [2024-07-15 12:59:50.308543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.441 [2024-07-15 12:59:50.308562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.441 qpair failed and we were unable to recover it. 
00:29:58.441 [2024-07-15 12:59:50.308655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.441 [2024-07-15 12:59:50.308673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.441 qpair failed and we were unable to recover it. 00:29:58.441 [2024-07-15 12:59:50.308787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.441 [2024-07-15 12:59:50.308805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.441 qpair failed and we were unable to recover it. 00:29:58.441 [2024-07-15 12:59:50.308910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.441 [2024-07-15 12:59:50.308928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.441 qpair failed and we were unable to recover it. 00:29:58.441 [2024-07-15 12:59:50.309109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.441 [2024-07-15 12:59:50.309126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.441 qpair failed and we were unable to recover it. 00:29:58.441 [2024-07-15 12:59:50.309314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.441 [2024-07-15 12:59:50.309334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.441 qpair failed and we were unable to recover it. 
00:29:58.441 [2024-07-15 12:59:50.309449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.441 [2024-07-15 12:59:50.309466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.441 qpair failed and we were unable to recover it. 00:29:58.441 [2024-07-15 12:59:50.309596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.441 [2024-07-15 12:59:50.309614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.441 qpair failed and we were unable to recover it. 00:29:58.721 [2024-07-15 12:59:50.309766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.309788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 00:29:58.721 [2024-07-15 12:59:50.309951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.309970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 00:29:58.721 [2024-07-15 12:59:50.310142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.310162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 
00:29:58.721 [2024-07-15 12:59:50.310340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.310359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 00:29:58.721 [2024-07-15 12:59:50.310470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.310489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 00:29:58.721 [2024-07-15 12:59:50.310649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.310667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 00:29:58.721 [2024-07-15 12:59:50.310761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.310780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 00:29:58.721 [2024-07-15 12:59:50.310879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.310896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 
00:29:58.721 [2024-07-15 12:59:50.311080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.311098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 00:29:58.721 [2024-07-15 12:59:50.311276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.311295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 00:29:58.721 [2024-07-15 12:59:50.311492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.311511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 00:29:58.721 [2024-07-15 12:59:50.311627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.311644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 00:29:58.721 [2024-07-15 12:59:50.311753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.311771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 
00:29:58.721 [2024-07-15 12:59:50.311929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.311948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 00:29:58.721 [2024-07-15 12:59:50.312067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.312085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 00:29:58.721 [2024-07-15 12:59:50.312195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.312212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 00:29:58.721 [2024-07-15 12:59:50.312320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.312339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 00:29:58.721 [2024-07-15 12:59:50.312460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.312478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 
00:29:58.721 [2024-07-15 12:59:50.312576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.312595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 00:29:58.721 [2024-07-15 12:59:50.312694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.312712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 00:29:58.721 [2024-07-15 12:59:50.312892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.312911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 00:29:58.721 [2024-07-15 12:59:50.313087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.313106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 00:29:58.721 [2024-07-15 12:59:50.313269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.721 [2024-07-15 12:59:50.313288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.721 qpair failed and we were unable to recover it. 
00:29:58.721 [2024-07-15 12:59:50.313572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.721 [2024-07-15 12:59:50.313592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.722 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and qpair-recovery error for tqpair=0x7f75d8000b90, addr=10.0.0.2, port=4420 repeats continuously through 12:59:50.332928 ...]
00:29:58.724 [2024-07-15 12:59:50.333101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.724 [2024-07-15 12:59:50.333118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.724 qpair failed and we were unable to recover it. 00:29:58.724 [2024-07-15 12:59:50.333218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.724 [2024-07-15 12:59:50.333237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.724 qpair failed and we were unable to recover it. 00:29:58.724 [2024-07-15 12:59:50.333360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.724 [2024-07-15 12:59:50.333378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.724 qpair failed and we were unable to recover it. 00:29:58.724 [2024-07-15 12:59:50.333472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.724 [2024-07-15 12:59:50.333490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.724 qpair failed and we were unable to recover it. 00:29:58.724 [2024-07-15 12:59:50.333586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.724 [2024-07-15 12:59:50.333603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.724 qpair failed and we were unable to recover it. 
00:29:58.724 [2024-07-15 12:59:50.333769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.333787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.333893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.333910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.334008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.334025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.334146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.334163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.334262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.334280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 
00:29:58.725 [2024-07-15 12:59:50.334462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.334481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.334609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.334626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.334754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.334773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.334873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.334891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.335053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.335072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 
00:29:58.725 [2024-07-15 12:59:50.335244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.335271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.335449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.335466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.335620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.335637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.335741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.335759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.335850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.335867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 
00:29:58.725 [2024-07-15 12:59:50.336038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.336056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.336170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.336187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.336307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.336330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.336458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.336475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.336657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.336675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 
00:29:58.725 [2024-07-15 12:59:50.336847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.336865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.337044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.337062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.337224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.337240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.337353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.337372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.337479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.337496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 
00:29:58.725 [2024-07-15 12:59:50.337607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.337625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.337759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.337777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.338006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.338024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.338131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.338148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.338335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.338354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 
00:29:58.725 [2024-07-15 12:59:50.338453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.338472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.338584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.338602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.338728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.725 [2024-07-15 12:59:50.338746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.725 qpair failed and we were unable to recover it. 00:29:58.725 [2024-07-15 12:59:50.338918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.338936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.339189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.339207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 
00:29:58.726 [2024-07-15 12:59:50.339330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.339348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.339466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.339484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.339717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.339735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.339838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.339855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.339954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.339971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 
00:29:58.726 [2024-07-15 12:59:50.340146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.340163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.340273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.340292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.340384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.340402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.340576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.340594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.340706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.340725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 
00:29:58.726 [2024-07-15 12:59:50.340924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.340941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.341051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.341069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.341181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.341199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.341341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.341360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.341535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.341552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 
00:29:58.726 [2024-07-15 12:59:50.341645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.341662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.341761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.341779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.341905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.341922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.342012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.342029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.342213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.342231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 
00:29:58.726 [2024-07-15 12:59:50.342364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.342382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.342474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.342492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.342606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.342628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.342751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.342769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.342979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.342997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 
00:29:58.726 [2024-07-15 12:59:50.343168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.343186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.343346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.343364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.343458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.343475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.343677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.343695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.343862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.343880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 
00:29:58.726 [2024-07-15 12:59:50.344013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.344031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.344209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.344227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.344398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.344417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.344513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.344531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 00:29:58.726 [2024-07-15 12:59:50.344642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.726 [2024-07-15 12:59:50.344660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.726 qpair failed and we were unable to recover it. 
00:29:58.726 [2024-07-15 12:59:50.344768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.726 [2024-07-15 12:59:50.344785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.726 qpair failed and we were unable to recover it.
[... the preceding three records repeated ~100 times with advancing timestamps (12:59:50.345057 through 12:59:50.361622); every connect() attempt to 10.0.0.2:4420 on tqpair=0x7f75d8000b90 failed with errno = 111 ...]
00:29:58.729 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:58.729 [2024-07-15 12:59:50.361854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.729 [2024-07-15 12:59:50.361872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.729 qpair failed and we were unable to recover it. 00:29:58.729 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:58.729 [2024-07-15 12:59:50.362058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.729 [2024-07-15 12:59:50.362076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.729 qpair failed and we were unable to recover it. 00:29:58.729 [2024-07-15 12:59:50.362249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.729 [2024-07-15 12:59:50.362275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.729 qpair failed and we were unable to recover it. 00:29:58.729 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:58.729 [2024-07-15 12:59:50.362443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.729 [2024-07-15 12:59:50.362462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.729 qpair failed and we were unable to recover it. 
00:29:58.729 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:58.729 [2024-07-15 12:59:50.362643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.729 [2024-07-15 12:59:50.362661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.729 qpair failed and we were unable to recover it.
00:29:58.729 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.729 [2024-07-15 12:59:50.362928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.729 [2024-07-15 12:59:50.362946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.729 qpair failed and we were unable to recover it.
00:29:58.729 [2024-07-15 12:59:50.363134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.729 [2024-07-15 12:59:50.363152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.729 qpair failed and we were unable to recover it.
00:29:58.729 [2024-07-15 12:59:50.363327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.729 [2024-07-15 12:59:50.363346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.729 qpair failed and we were unable to recover it.
00:29:58.729 [2024-07-15 12:59:50.363596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.729 [2024-07-15 12:59:50.363614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.729 qpair failed and we were unable to recover it.
00:29:58.729 [2024-07-15 12:59:50.363724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.729 [2024-07-15 12:59:50.363742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.729 qpair failed and we were unable to recover it.
00:29:58.729 [2024-07-15 12:59:50.363910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.729 [2024-07-15 12:59:50.363927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.729 qpair failed and we were unable to recover it.
00:29:58.729 [2024-07-15 12:59:50.364035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.729 [2024-07-15 12:59:50.364053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.729 qpair failed and we were unable to recover it.
00:29:58.729 [2024-07-15 12:59:50.364148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.729 [2024-07-15 12:59:50.364166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.729 qpair failed and we were unable to recover it.
00:29:58.729 [2024-07-15 12:59:50.364345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.729 [2024-07-15 12:59:50.364364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.729 qpair failed and we were unable to recover it.
00:29:58.729 [2024-07-15 12:59:50.364542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.729 [2024-07-15 12:59:50.364560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.729 qpair failed and we were unable to recover it.
00:29:58.729 [2024-07-15 12:59:50.364670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.729 [2024-07-15 12:59:50.364687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.729 qpair failed and we were unable to recover it.
00:29:58.729 [2024-07-15 12:59:50.364791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.729 [2024-07-15 12:59:50.364809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.729 qpair failed and we were unable to recover it.
00:29:58.729 [2024-07-15 12:59:50.364924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.729 [2024-07-15 12:59:50.364942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.729 qpair failed and we were unable to recover it.
00:29:58.729 [2024-07-15 12:59:50.365120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.729 [2024-07-15 12:59:50.365138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.729 qpair failed and we were unable to recover it.
00:29:58.729 [2024-07-15 12:59:50.365318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.729 [2024-07-15 12:59:50.365338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.729 qpair failed and we were unable to recover it.
00:29:58.729 [2024-07-15 12:59:50.365519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.729 [2024-07-15 12:59:50.365537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.729 qpair failed and we were unable to recover it.
00:29:58.729 [2024-07-15 12:59:50.365655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.729 [2024-07-15 12:59:50.365672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.729 qpair failed and we were unable to recover it.
00:29:58.729 [2024-07-15 12:59:50.365852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.729 [2024-07-15 12:59:50.365870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.729 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.366044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.366062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.366167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.366184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.366276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.366295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.366491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.366510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.366673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.366691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.366873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.366894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.367066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.367086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.367200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.367218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.367395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.367414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.367673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.367694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.367865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.367883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.367975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.367992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.368104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.368122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.368223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.368240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.368523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.368542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.368662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.368680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.368800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.368818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.368933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.368951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.369051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.369068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.369172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.369191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.369361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.369380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.369472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.369489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.369594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.369612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.369714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.369731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.369852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.369869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.369952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.369970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.370087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.370104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.370228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.370247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.370362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.370380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.370562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.370581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.370692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.370710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.370835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.370853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.371030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.371049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.371215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.371234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.371365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.371383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.730 [2024-07-15 12:59:50.371591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.730 [2024-07-15 12:59:50.371610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.730 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.371835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.371853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.371976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.371994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.372094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.372111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.372291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.372310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.372409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.372426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.372673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.372691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.372798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.372816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.372926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.372945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.373044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.373064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.373267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.373285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.373503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.373522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.373617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.373635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.373810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.373828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.373963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.373984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.374073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.374091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.374198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.374216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.374405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.374425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.374628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.374647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.374834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.374853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.375034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.375052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.375183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.375201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.375307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.375326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.375487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.375505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.375741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.375761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.375885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.375903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.375994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.376012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.376094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.376111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.376220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.376239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.376328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.376346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.376435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.376452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.376531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.376548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.376641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.376659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.376759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.376778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.376954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.376972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.377055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.377073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.377198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.731 [2024-07-15 12:59:50.377216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.731 qpair failed and we were unable to recover it.
00:29:58.731 [2024-07-15 12:59:50.377323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.732 [2024-07-15 12:59:50.377342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.732 qpair failed and we were unable to recover it.
00:29:58.732 [2024-07-15 12:59:50.377433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.732 [2024-07-15 12:59:50.377451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.732 qpair failed and we were unable to recover it.
00:29:58.732 [2024-07-15 12:59:50.377572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.732 [2024-07-15 12:59:50.377591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.732 qpair failed and we were unable to recover it.
00:29:58.732 [2024-07-15 12:59:50.377711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.732 [2024-07-15 12:59:50.377729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.732 qpair failed and we were unable to recover it.
00:29:58.732 [2024-07-15 12:59:50.377903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.377920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.378124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.378143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.378273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.378292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.378461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.378479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.378652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.378671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 
00:29:58.732 [2024-07-15 12:59:50.378770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.378787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.378896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.378913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.379087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.379106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.379287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.379305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.379404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.379422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 
00:29:58.732 [2024-07-15 12:59:50.379521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.379539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.379732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.379750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.379859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.379877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.380110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.380132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.380236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.380260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 
00:29:58.732 [2024-07-15 12:59:50.380437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.380455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.380663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.380681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.380851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.380868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.380957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.380974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.381162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.381181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 
00:29:58.732 [2024-07-15 12:59:50.381312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.381330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.381428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.381446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.381552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.381570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.381759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.381777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.381898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.381917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 
00:29:58.732 [2024-07-15 12:59:50.382016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.382034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.382128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.382146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.382335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.382354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.382461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.382479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 00:29:58.732 [2024-07-15 12:59:50.382576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.732 [2024-07-15 12:59:50.382595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.732 qpair failed and we were unable to recover it. 
00:29:58.732 [2024-07-15 12:59:50.382760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.382778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.382869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.382887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.382991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.383009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.383099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.383118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.383234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.383262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 
00:29:58.733 [2024-07-15 12:59:50.383375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.383393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.383502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.383520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.383611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.383629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.383723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.383741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.383847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.383865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 
00:29:58.733 [2024-07-15 12:59:50.384032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.384050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.384148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.384167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.384283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.384302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.384465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.384483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.384584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.384602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 
00:29:58.733 [2024-07-15 12:59:50.384707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.384725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.384831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.384849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.384934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.384952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.385061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.385078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.385251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.385277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 
00:29:58.733 [2024-07-15 12:59:50.385385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.385404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.385607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.385625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.385788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.385806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.385926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.385948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.386116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.386134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 
00:29:58.733 [2024-07-15 12:59:50.386261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.386280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.386386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.386405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.386601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.386619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.386750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.386769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.733 qpair failed and we were unable to recover it. 00:29:58.733 [2024-07-15 12:59:50.386942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.733 [2024-07-15 12:59:50.386960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 
00:29:58.734 [2024-07-15 12:59:50.387065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.387083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 00:29:58.734 [2024-07-15 12:59:50.387189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.387207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 00:29:58.734 [2024-07-15 12:59:50.387342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.387361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 00:29:58.734 [2024-07-15 12:59:50.387487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.387505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 00:29:58.734 [2024-07-15 12:59:50.387727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.387746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 
00:29:58.734 [2024-07-15 12:59:50.387866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.387884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 00:29:58.734 [2024-07-15 12:59:50.387987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.388005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 00:29:58.734 [2024-07-15 12:59:50.388108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.388126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 00:29:58.734 [2024-07-15 12:59:50.388290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.388310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 00:29:58.734 [2024-07-15 12:59:50.388478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.388496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 
00:29:58.734 [2024-07-15 12:59:50.388678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.388696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 00:29:58.734 [2024-07-15 12:59:50.388877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.388894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 00:29:58.734 [2024-07-15 12:59:50.389007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.389026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 00:29:58.734 [2024-07-15 12:59:50.389155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.389174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 00:29:58.734 [2024-07-15 12:59:50.389341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.389360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 
00:29:58.734 [2024-07-15 12:59:50.389529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.389548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 00:29:58.734 [2024-07-15 12:59:50.389711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.389729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 00:29:58.734 [2024-07-15 12:59:50.389852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.389870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 00:29:58.734 [2024-07-15 12:59:50.389966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.389984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 00:29:58.734 [2024-07-15 12:59:50.390093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.390111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 
00:29:58.734 [2024-07-15 12:59:50.390223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.390241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 00:29:58.734 [2024-07-15 12:59:50.390356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.390374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 00:29:58.734 [2024-07-15 12:59:50.390492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.390510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 00:29:58.734 [2024-07-15 12:59:50.390685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.390704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 00:29:58.734 [2024-07-15 12:59:50.390848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.390867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it. 
00:29:58.734 [2024-07-15 12:59:50.390966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.734 [2024-07-15 12:59:50.390984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.734 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed" messages repeated for timestamps 12:59:50.391214 through 12:59:50.397158; duplicates omitted]
00:29:58.736 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:58.736 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:58.736 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:58.736 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed" messages repeated for timestamps 12:59:50.399499 through 12:59:50.407434; duplicates omitted]
00:29:58.738 [2024-07-15 12:59:50.407604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.407623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.407731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.407748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.407844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.407862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.407967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.407985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.408178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.408197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 
00:29:58.738 [2024-07-15 12:59:50.408295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.408314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.408412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.408429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.408534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.408551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.408670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.408688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.408865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.408883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 
00:29:58.738 [2024-07-15 12:59:50.408978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.408996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.409088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.409106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.409273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.409292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.409510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.409528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.409622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.409640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 
00:29:58.738 [2024-07-15 12:59:50.409783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.409801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.409895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.409913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.410169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.410188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.410304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.410323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.410421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.410438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 
00:29:58.738 [2024-07-15 12:59:50.410541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.410560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.410667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.410684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.410789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.410807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.410928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.410945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.411123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.411142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 
00:29:58.738 [2024-07-15 12:59:50.411250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.411277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.411457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.411476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.411587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.411606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.411728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.738 [2024-07-15 12:59:50.411746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.738 qpair failed and we were unable to recover it. 00:29:58.738 [2024-07-15 12:59:50.411842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.411859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 
00:29:58.739 [2024-07-15 12:59:50.411961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.411979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.412076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.412096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.412193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.412211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.412337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.412357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.412465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.412482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 
00:29:58.739 [2024-07-15 12:59:50.412585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.412603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.412695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.412712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.412809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.412827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.412918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.412936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.413112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.413135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 
00:29:58.739 [2024-07-15 12:59:50.413247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.413272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.413435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.413454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.413568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.413585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.413676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.413694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.413870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.413889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 
00:29:58.739 [2024-07-15 12:59:50.413991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.414008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.414103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.414122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.414222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.414240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.414345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.414364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.414525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.414543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 
00:29:58.739 [2024-07-15 12:59:50.414649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.414668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.414770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.414788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.414884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.414902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.415017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.415034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.415129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.415147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 
00:29:58.739 [2024-07-15 12:59:50.415312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.415331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.415444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.415462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.415556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.415575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.415691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.415710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 00:29:58.739 [2024-07-15 12:59:50.415809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.415827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.739 qpair failed and we were unable to recover it. 
00:29:58.739 [2024-07-15 12:59:50.415923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.739 [2024-07-15 12:59:50.415943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.416036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.416053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.416154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.416172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.416284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.416303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.416490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.416509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 
00:29:58.740 [2024-07-15 12:59:50.416617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.416634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.416731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.416749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.416864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.416881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.416986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.417005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.417097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.417114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 
00:29:58.740 [2024-07-15 12:59:50.417206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.417224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.417358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.417377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.417568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.417586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.417764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.417783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.417883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.417901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 
00:29:58.740 [2024-07-15 12:59:50.418001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.418019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.418126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.418143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.418252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.418276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.418369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.418387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.418517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.418539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 
00:29:58.740 [2024-07-15 12:59:50.418651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.418670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.418821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.418840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.418941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.418959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.419053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.419071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.419243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.419269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 
00:29:58.740 [2024-07-15 12:59:50.419366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.419385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.419485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.419503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.419597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.419615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.419790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.419808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.740 qpair failed and we were unable to recover it. 00:29:58.740 [2024-07-15 12:59:50.419925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.740 [2024-07-15 12:59:50.419943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 
00:29:58.741 [2024-07-15 12:59:50.420051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.420070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.420171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.420188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.420349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.420368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.420473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.420492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.420600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.420618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 
00:29:58.741 [2024-07-15 12:59:50.420719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.420737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.420903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.420920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.421011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.421028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.421164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.421181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.421280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.421298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 
00:29:58.741 [2024-07-15 12:59:50.421430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.421447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.421549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.421567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.421736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.421754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.422014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.422032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.422170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.422188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 
00:29:58.741 [2024-07-15 12:59:50.422298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.422317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.422431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.422448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.422540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.422558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.422718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.422736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.422826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.422843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 
00:29:58.741 [2024-07-15 12:59:50.422950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.422967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.423064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.423082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.423312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.423332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.423433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.423451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.423556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.423575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 
00:29:58.741 [2024-07-15 12:59:50.423737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.423755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.423846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.423863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.423968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.423986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.424093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.424111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.424276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.424301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 
00:29:58.741 [2024-07-15 12:59:50.424427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.424445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.424535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.424553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.424645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.424662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.424764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.424783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.424881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.424898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 
00:29:58.741 [2024-07-15 12:59:50.425086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.425105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.425215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.425233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 Malloc0 00:29:58.741 [2024-07-15 12:59:50.425375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.425394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.425507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.425525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 00:29:58.741 [2024-07-15 12:59:50.425629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.741 [2024-07-15 12:59:50.425647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.741 qpair failed and we were unable to recover it. 
00:29:58.742 [2024-07-15 12:59:50.425745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.425762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.425852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.425869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.742 [2024-07-15 12:59:50.426043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.426066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.426156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.426174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 
00:29:58.742 [2024-07-15 12:59:50.426349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.426368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:58.742 [2024-07-15 12:59:50.426492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.426510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.426606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.426626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.742 [2024-07-15 12:59:50.426786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.426805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 
00:29:58.742 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.742 [2024-07-15 12:59:50.426943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.426963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.427075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.427093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.427185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.427203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.427454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.427472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.427572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.427591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 
00:29:58.742 [2024-07-15 12:59:50.427706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.427724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.427894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.427915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.428032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.428050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.428149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.428167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.428287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.428306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 
00:29:58.742 [2024-07-15 12:59:50.428413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.428431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.428541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.428558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.428666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.428684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.428784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.428802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.428898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.428916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 
00:29:58.742 [2024-07-15 12:59:50.429004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.429022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.429188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.429206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.429389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.429408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.429508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.429526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.429684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.429702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 
00:29:58.742 [2024-07-15 12:59:50.429809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.429826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.429936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.429953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.430158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.430176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.430285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.430304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.430417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.430435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 
00:29:58.742 [2024-07-15 12:59:50.430530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.430549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.430717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.430736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.430832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.430849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.430951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.430969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-07-15 12:59:50.431135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.742 [2024-07-15 12:59:50.431153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.742 qpair failed and we were unable to recover it. 
00:29:58.742 [2024-07-15 12:59:50.431273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.742 [2024-07-15 12:59:50.431292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.742 qpair failed and we were unable to recover it.
00:29:58.742 [2024-07-15 12:59:50.431383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.431400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.431528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.431546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.431642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.431660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.431764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.431781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.431872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.431889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.431994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.432012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.432118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.432137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.432234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.432252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.432381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.432399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.432526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.432544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.432631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.432649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.432748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.432765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.432857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.432875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.433044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.433061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.433150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.433168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.433158] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:58.743 [2024-07-15 12:59:50.433334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.433353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.433531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.433548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.433731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.433749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.433864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.433882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.433975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.433993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.434097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.434115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.434211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.434228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.434397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.434416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.434513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.434530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.434700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.434719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.434819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.434836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.434927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.434945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.435107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.435124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.435316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.435338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.435450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.435468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.435657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.435674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.435836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.435854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.435951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.435970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.436139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.436156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.436267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.436287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.436389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.436406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.436578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.436595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.436698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.436715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.436888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.436905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.437004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.437022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.743 [2024-07-15 12:59:50.437123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.743 [2024-07-15 12:59:50.437140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.743 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.437246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.437295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.437463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.437481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.437574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.437591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.437698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.437716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.437812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.437829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.437921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.437938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.438027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.438046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.438160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.438178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.438364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.438384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.438480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.438498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.438602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.438620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.438713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.438730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.438887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.438905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.438996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.439013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.439268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.439288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.439491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.439509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.439611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.439628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.439777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.439795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.439912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.439929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.440034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.440051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.440140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.440157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.440265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.440283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.440455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.440473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.440655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.440672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.440791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.440808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.440976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.440994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.441157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.441175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.441287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.441304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.441487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.441506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.441630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.441647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.441830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.441849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.441938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.441956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:58.744 [2024-07-15 12:59:50.442072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.442089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 [2024-07-15 12:59:50.442208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.744 [2024-07-15 12:59:50.442227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.744 qpair failed and we were unable to recover it.
00:29:58.744 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:58.744 [2024-07-15 12:59:50.442433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.442452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.442564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.442582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.442685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.442704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:58.745 [2024-07-15 12:59:50.442804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.442821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.442931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.442949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.745 [2024-07-15 12:59:50.443048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.443066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.443298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.443317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.443424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.443443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.443548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.443565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.443674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.443693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.443863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.443881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.444054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.444072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.444244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.444270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.444398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.444415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.444630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.444648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.444824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.444841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.445088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.445105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.445209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.445226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.445359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.445381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.445545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.445564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.445673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.445691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.445779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.445798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.445962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.445979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.446072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.446090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.446186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.446203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.446389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.446408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.446592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.446610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.446713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.446731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.446835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.446853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.447110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.447127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.447228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.447245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.447441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.447459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.447554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.447572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.447677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.447695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.447789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.447807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.447995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.448013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.448110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.448128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.448303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.745 [2024-07-15 12:59:50.448321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.745 qpair failed and we were unable to recover it.
00:29:58.745 [2024-07-15 12:59:50.448482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.745 [2024-07-15 12:59:50.448499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.745 qpair failed and we were unable to recover it. 00:29:58.745 [2024-07-15 12:59:50.448594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.745 [2024-07-15 12:59:50.448612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.745 qpair failed and we were unable to recover it. 00:29:58.745 [2024-07-15 12:59:50.448711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.448729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.448839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.448857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.449090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.449108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 
00:29:58.746 [2024-07-15 12:59:50.449228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.449246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.449347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.449365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.449565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.449583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.449787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.449805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.449910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.449927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 
00:29:58.746 [2024-07-15 12:59:50.450022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.450040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.450211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.450229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.450341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.450360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.450589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.450607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.450727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.450744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 
00:29:58.746 [2024-07-15 12:59:50.450936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.450954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.451057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.451074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.451250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.451276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.451471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.451490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.451604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.451622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 
00:29:58.746 [2024-07-15 12:59:50.451718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.451739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.451831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.451849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.452009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.452027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.452125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.452143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.452243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.452270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 
00:29:58.746 [2024-07-15 12:59:50.452444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.452463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.452643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.452660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.452860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.452878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.452980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.452998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.453089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.453107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 
00:29:58.746 [2024-07-15 12:59:50.453269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.453289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.453396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.453413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.453578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.453596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.453759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.453777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 00:29:58.746 [2024-07-15 12:59:50.453885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.746 [2024-07-15 12:59:50.453903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.746 qpair failed and we were unable to recover it. 
00:29:58.746 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:58.746 [2024-07-15 12:59:50.454105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.746 [2024-07-15 12:59:50.454124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.746 qpair failed and we were unable to recover it.
00:29:58.746 [2024-07-15 12:59:50.454223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.746 [2024-07-15 12:59:50.454240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.746 qpair failed and we were unable to recover it.
00:29:58.746 [2024-07-15 12:59:50.454352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.746 [2024-07-15 12:59:50.454370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.746 qpair failed and we were unable to recover it.
00:29:58.746 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:58.746 [2024-07-15 12:59:50.454506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.746 [2024-07-15 12:59:50.454523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.746 qpair failed and we were unable to recover it.
00:29:58.746 [2024-07-15 12:59:50.454634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.746 [2024-07-15 12:59:50.454654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.746 qpair failed and we were unable to recover it.
00:29:58.746 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:58.746 [2024-07-15 12:59:50.454750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.746 [2024-07-15 12:59:50.454768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.746 qpair failed and we were unable to recover it.
00:29:58.746 [2024-07-15 12:59:50.454932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.746 [2024-07-15 12:59:50.454949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.746 qpair failed and we were unable to recover it.
00:29:58.747 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.747 [2024-07-15 12:59:50.455060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.747 [2024-07-15 12:59:50.455079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.747 qpair failed and we were unable to recover it.
00:29:58.747 [2024-07-15 12:59:50.455168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.747 [2024-07-15 12:59:50.455186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.747 qpair failed and we were unable to recover it.
00:29:58.747 [2024-07-15 12:59:50.455284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.455303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.455394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.455416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.455657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.455675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.455767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.455786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.455954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.455973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 
00:29:58.747 [2024-07-15 12:59:50.456080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.456098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.456298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.456317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.456416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.456434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.456538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.456555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.456650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.456668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 
00:29:58.747 [2024-07-15 12:59:50.456854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.456872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.457039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.457057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.457178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.457196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.457386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.457404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.457499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.457517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 
00:29:58.747 [2024-07-15 12:59:50.457618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.457635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.457796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.457814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.458001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.458018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.458138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.458156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.458284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.458303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 
00:29:58.747 [2024-07-15 12:59:50.458469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.458487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.458668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.458685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.458787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.458804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.458977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.458996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.459105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.459124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 
00:29:58.747 [2024-07-15 12:59:50.459222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.459239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.459448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.459466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.459561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.459578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.459745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.459763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 00:29:58.747 [2024-07-15 12:59:50.459854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.747 [2024-07-15 12:59:50.459870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420 00:29:58.747 qpair failed and we were unable to recover it. 
00:29:58.747 [2024-07-15 12:59:50.459965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.747 [2024-07-15 12:59:50.459982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.747 qpair failed and we were unable to recover it.
00:29:58.747 [2024-07-15 12:59:50.460075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.747 [2024-07-15 12:59:50.460092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.747 qpair failed and we were unable to recover it.
00:29:58.747 [2024-07-15 12:59:50.460253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.747 [2024-07-15 12:59:50.460278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.747 qpair failed and we were unable to recover it.
00:29:58.747 [2024-07-15 12:59:50.460452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.747 [2024-07-15 12:59:50.460470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.747 qpair failed and we were unable to recover it.
00:29:58.747 [2024-07-15 12:59:50.460628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.747 [2024-07-15 12:59:50.460645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.747 qpair failed and we were unable to recover it.
00:29:58.747 [2024-07-15 12:59:50.460819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.747 [2024-07-15 12:59:50.460837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.747 qpair failed and we were unable to recover it.
00:29:58.747 [2024-07-15 12:59:50.460937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.747 [2024-07-15 12:59:50.460955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.747 qpair failed and we were unable to recover it.
00:29:58.747 [2024-07-15 12:59:50.461115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.747 [2024-07-15 12:59:50.461132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.747 qpair failed and we were unable to recover it.
00:29:58.747 [2024-07-15 12:59:50.461295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.747 [2024-07-15 12:59:50.461314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.461420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.461437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.461608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.461625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.461795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.461816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.461932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.461949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.462053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.462071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.462262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.462281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.462382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.462402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:58.748 [2024-07-15 12:59:50.462590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.462608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.462712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.462729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:58.748 [2024-07-15 12:59:50.462828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.462846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.462955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.462972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.748 [2024-07-15 12:59:50.463064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.463082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.463293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.463312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.463551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.463569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.463674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.463692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.463818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.463837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.464034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.464052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.464213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.464230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.464329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.464348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.464507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.464524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.464711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.464728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.464835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.464853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.464971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.464988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.465164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.465182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.465296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.465315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.465434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.465452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.465549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.465566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.465673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.748 [2024-07-15 12:59:50.465693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f75d8000b90 with addr=10.0.0.2, port=4420
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 [2024-07-15 12:59:50.465858] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:58.748 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:58.748 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:58.748 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:58.748 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.748 [2024-07-15 12:59:50.474069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.748 [2024-07-15 12:59:50.474218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.748 [2024-07-15 12:59:50.474249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.748 [2024-07-15 12:59:50.474288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.748 [2024-07-15 12:59:50.474300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f75d8000b90
00:29:58.748 [2024-07-15 12:59:50.474336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.748 qpair failed and we were unable to recover it.
00:29:58.748 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:58.749 12:59:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 4116849
00:29:58.749 [2024-07-15 12:59:50.484092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.749 [2024-07-15 12:59:50.484214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.749 [2024-07-15 12:59:50.484240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.749 [2024-07-15 12:59:50.484253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.749 [2024-07-15 12:59:50.484274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f75d8000b90
00:29:58.749 [2024-07-15 12:59:50.484300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.749 qpair failed and we were unable to recover it.
00:29:58.749 [2024-07-15 12:59:50.494079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.749 [2024-07-15 12:59:50.494213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.749 [2024-07-15 12:59:50.494240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.749 [2024-07-15 12:59:50.494252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.749 [2024-07-15 12:59:50.494273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f75d8000b90
00:29:58.749 [2024-07-15 12:59:50.494299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.749 qpair failed and we were unable to recover it.
00:29:58.749 [2024-07-15 12:59:50.504277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.749 [2024-07-15 12:59:50.504414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.749 [2024-07-15 12:59:50.504445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.749 [2024-07-15 12:59:50.504458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.749 [2024-07-15 12:59:50.504469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f75d8000b90
00:29:58.749 [2024-07-15 12:59:50.504495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.749 qpair failed and we were unable to recover it.
00:29:58.749 [2024-07-15 12:59:50.514081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.749 [2024-07-15 12:59:50.514196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.749 [2024-07-15 12:59:50.514221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.749 [2024-07-15 12:59:50.514234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.749 [2024-07-15 12:59:50.514244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f75d8000b90
00:29:58.749 [2024-07-15 12:59:50.514280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.749 qpair failed and we were unable to recover it.
00:29:58.749 [2024-07-15 12:59:50.524087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.749 [2024-07-15 12:59:50.524193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.749 [2024-07-15 12:59:50.524217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.749 [2024-07-15 12:59:50.524230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.749 [2024-07-15 12:59:50.524241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f75d8000b90
00:29:58.749 [2024-07-15 12:59:50.524273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.749 qpair failed and we were unable to recover it.
00:29:58.749 [2024-07-15 12:59:50.534152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.749 [2024-07-15 12:59:50.534273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.749 [2024-07-15 12:59:50.534299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.749 [2024-07-15 12:59:50.534311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.749 [2024-07-15 12:59:50.534322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f75d8000b90
00:29:58.749 [2024-07-15 12:59:50.534347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.749 qpair failed and we were unable to recover it.
00:29:58.749 [2024-07-15 12:59:50.544393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.749 [2024-07-15 12:59:50.544604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.749 [2024-07-15 12:59:50.544630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.749 [2024-07-15 12:59:50.544642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.749 [2024-07-15 12:59:50.544654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f75d8000b90
00:29:58.749 [2024-07-15 12:59:50.544684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.749 qpair failed and we were unable to recover it.
00:29:58.749 [2024-07-15 12:59:50.554161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.749 [2024-07-15 12:59:50.554271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.749 [2024-07-15 12:59:50.554296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.749 [2024-07-15 12:59:50.554308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.749 [2024-07-15 12:59:50.554319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f75d8000b90
00:29:58.749 [2024-07-15 12:59:50.554344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.749 qpair failed and we were unable to recover it.
00:29:58.749 [2024-07-15 12:59:50.564154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.749 [2024-07-15 12:59:50.564250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.749 [2024-07-15 12:59:50.564281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.749 [2024-07-15 12:59:50.564294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.749 [2024-07-15 12:59:50.564306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f75d8000b90
00:29:58.749 [2024-07-15 12:59:50.564331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.749 qpair failed and we were unable to recover it.
00:29:58.749 [2024-07-15 12:59:50.574207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.749 [2024-07-15 12:59:50.574332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.749 [2024-07-15 12:59:50.574358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.749 [2024-07-15 12:59:50.574371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.749 [2024-07-15 12:59:50.574381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f75d8000b90
00:29:58.749 [2024-07-15 12:59:50.574408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.749 qpair failed and we were unable to recover it.
00:29:58.749 [2024-07-15 12:59:50.584488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.749 [2024-07-15 12:59:50.584637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.749 [2024-07-15 12:59:50.584663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.749 [2024-07-15 12:59:50.584675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.749 [2024-07-15 12:59:50.584686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f75d8000b90
00:29:58.749 [2024-07-15 12:59:50.584712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.749 qpair failed and we were unable to recover it.
00:29:58.749 [2024-07-15 12:59:50.594374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.749 [2024-07-15 12:59:50.594502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.749 [2024-07-15 12:59:50.594533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.749 [2024-07-15 12:59:50.594544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.749 [2024-07-15 12:59:50.594556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f75d8000b90
00:29:58.749 [2024-07-15 12:59:50.594581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.749 qpair failed and we were unable to recover it.
00:29:58.749 [2024-07-15 12:59:50.604377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.749 [2024-07-15 12:59:50.604471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.749 [2024-07-15 12:59:50.604496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.749 [2024-07-15 12:59:50.604508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.749 [2024-07-15 12:59:50.604519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f75d8000b90
00:29:58.749 [2024-07-15 12:59:50.604544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.749 qpair failed and we were unable to recover it.
00:29:58.749 [2024-07-15 12:59:50.614402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.749 [2024-07-15 12:59:50.614523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.749 [2024-07-15 12:59:50.614549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.749 [2024-07-15 12:59:50.614561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.749 [2024-07-15 12:59:50.614572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f75d8000b90
00:29:58.749 [2024-07-15 12:59:50.614597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.749 qpair failed and we were unable to recover it.
00:29:58.749 [2024-07-15 12:59:50.624630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.749 [2024-07-15 12:59:50.624772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.750 [2024-07-15 12:59:50.624800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.750 [2024-07-15 12:59:50.624812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.750 [2024-07-15 12:59:50.624823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f75d8000b90
00:29:58.750 [2024-07-15 12:59:50.624849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.750 qpair failed and we were unable to recover it.
00:29:58.750 [2024-07-15 12:59:50.634511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.750 [2024-07-15 12:59:50.634660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.750 [2024-07-15 12:59:50.634718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.750 [2024-07-15 12:59:50.634745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.750 [2024-07-15 12:59:50.634776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:58.750 [2024-07-15 12:59:50.634822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:58.750 qpair failed and we were unable to recover it.
00:29:59.009 [2024-07-15 12:59:50.644511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.009 [2024-07-15 12:59:50.644634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.009 [2024-07-15 12:59:50.644664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.009 [2024-07-15 12:59:50.644679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.009 [2024-07-15 12:59:50.644692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.009 [2024-07-15 12:59:50.644722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.009 qpair failed and we were unable to recover it. 
00:29:59.009 [2024-07-15 12:59:50.654539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.009 [2024-07-15 12:59:50.654656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.009 [2024-07-15 12:59:50.654679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.009 [2024-07-15 12:59:50.654689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.009 [2024-07-15 12:59:50.654698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.009 [2024-07-15 12:59:50.654718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.009 qpair failed and we were unable to recover it. 
00:29:59.009 [2024-07-15 12:59:50.664762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.009 [2024-07-15 12:59:50.664880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.009 [2024-07-15 12:59:50.664900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.010 [2024-07-15 12:59:50.664910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.010 [2024-07-15 12:59:50.664918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.010 [2024-07-15 12:59:50.664938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.010 qpair failed and we were unable to recover it. 
00:29:59.271 [2024-07-15 12:59:51.015666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.271 [2024-07-15 12:59:51.015752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.271 [2024-07-15 12:59:51.015772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.271 [2024-07-15 12:59:51.015782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.272 [2024-07-15 12:59:51.015790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.272 [2024-07-15 12:59:51.015809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.272 qpair failed and we were unable to recover it. 
00:29:59.272 [2024-07-15 12:59:51.025915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.272 [2024-07-15 12:59:51.026029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.272 [2024-07-15 12:59:51.026048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.272 [2024-07-15 12:59:51.026059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.272 [2024-07-15 12:59:51.026067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.272 [2024-07-15 12:59:51.026086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.272 qpair failed and we were unable to recover it. 
00:29:59.272 [2024-07-15 12:59:51.035787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.272 [2024-07-15 12:59:51.035909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.272 [2024-07-15 12:59:51.035937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.272 [2024-07-15 12:59:51.035947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.272 [2024-07-15 12:59:51.035955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.272 [2024-07-15 12:59:51.035975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.272 qpair failed and we were unable to recover it. 
00:29:59.272 [2024-07-15 12:59:51.045819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.272 [2024-07-15 12:59:51.045902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.272 [2024-07-15 12:59:51.045921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.272 [2024-07-15 12:59:51.045930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.272 [2024-07-15 12:59:51.045938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.272 [2024-07-15 12:59:51.045957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.272 qpair failed and we were unable to recover it. 
00:29:59.272 [2024-07-15 12:59:51.055870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.272 [2024-07-15 12:59:51.055952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.272 [2024-07-15 12:59:51.055971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.272 [2024-07-15 12:59:51.055980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.272 [2024-07-15 12:59:51.055988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.272 [2024-07-15 12:59:51.056007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.272 qpair failed and we were unable to recover it. 
00:29:59.272 [2024-07-15 12:59:51.066163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.272 [2024-07-15 12:59:51.066295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.272 [2024-07-15 12:59:51.066315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.272 [2024-07-15 12:59:51.066324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.272 [2024-07-15 12:59:51.066333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.272 [2024-07-15 12:59:51.066352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.272 qpair failed and we were unable to recover it. 
00:29:59.272 [2024-07-15 12:59:51.075983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.272 [2024-07-15 12:59:51.076081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.272 [2024-07-15 12:59:51.076100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.272 [2024-07-15 12:59:51.076110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.272 [2024-07-15 12:59:51.076118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.272 [2024-07-15 12:59:51.076136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.272 qpair failed and we were unable to recover it. 
00:29:59.272 [2024-07-15 12:59:51.086018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.272 [2024-07-15 12:59:51.086104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.272 [2024-07-15 12:59:51.086127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.272 [2024-07-15 12:59:51.086137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.272 [2024-07-15 12:59:51.086145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.272 [2024-07-15 12:59:51.086165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.272 qpair failed and we were unable to recover it. 
00:29:59.272 [2024-07-15 12:59:51.096051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.272 [2024-07-15 12:59:51.096171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.272 [2024-07-15 12:59:51.096191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.272 [2024-07-15 12:59:51.096200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.272 [2024-07-15 12:59:51.096209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.272 [2024-07-15 12:59:51.096228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.272 qpair failed and we were unable to recover it. 
00:29:59.272 [2024-07-15 12:59:51.106204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.272 [2024-07-15 12:59:51.106357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.272 [2024-07-15 12:59:51.106379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.272 [2024-07-15 12:59:51.106389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.272 [2024-07-15 12:59:51.106398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.272 [2024-07-15 12:59:51.106417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.272 qpair failed and we were unable to recover it. 
00:29:59.272 [2024-07-15 12:59:51.116075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.272 [2024-07-15 12:59:51.116174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.272 [2024-07-15 12:59:51.116194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.272 [2024-07-15 12:59:51.116203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.272 [2024-07-15 12:59:51.116211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.272 [2024-07-15 12:59:51.116230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.272 qpair failed and we were unable to recover it. 
00:29:59.272 [2024-07-15 12:59:51.126066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.272 [2024-07-15 12:59:51.126161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.272 [2024-07-15 12:59:51.126180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.272 [2024-07-15 12:59:51.126189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.272 [2024-07-15 12:59:51.126198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.272 [2024-07-15 12:59:51.126220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.272 qpair failed and we were unable to recover it. 
00:29:59.272 [2024-07-15 12:59:51.136172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.272 [2024-07-15 12:59:51.136300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.272 [2024-07-15 12:59:51.136326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.272 [2024-07-15 12:59:51.136335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.272 [2024-07-15 12:59:51.136344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.272 [2024-07-15 12:59:51.136362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.272 qpair failed and we were unable to recover it. 
00:29:59.272 [2024-07-15 12:59:51.146347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.272 [2024-07-15 12:59:51.146468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.272 [2024-07-15 12:59:51.146488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.272 [2024-07-15 12:59:51.146499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.272 [2024-07-15 12:59:51.146507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.272 [2024-07-15 12:59:51.146526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.272 qpair failed and we were unable to recover it. 
00:29:59.272 [2024-07-15 12:59:51.156186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.272 [2024-07-15 12:59:51.156290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.272 [2024-07-15 12:59:51.156309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.272 [2024-07-15 12:59:51.156318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.272 [2024-07-15 12:59:51.156326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.272 [2024-07-15 12:59:51.156344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.272 qpair failed and we were unable to recover it. 
00:29:59.272 [2024-07-15 12:59:51.166228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.273 [2024-07-15 12:59:51.166358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.273 [2024-07-15 12:59:51.166380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.273 [2024-07-15 12:59:51.166389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.273 [2024-07-15 12:59:51.166397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.273 [2024-07-15 12:59:51.166417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.273 qpair failed and we were unable to recover it. 
00:29:59.273 [2024-07-15 12:59:51.176188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.273 [2024-07-15 12:59:51.176321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.273 [2024-07-15 12:59:51.176345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.273 [2024-07-15 12:59:51.176355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.273 [2024-07-15 12:59:51.176363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.273 [2024-07-15 12:59:51.176383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.273 qpair failed and we were unable to recover it. 
00:29:59.273 [2024-07-15 12:59:51.186471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.273 [2024-07-15 12:59:51.186586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.273 [2024-07-15 12:59:51.186605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.273 [2024-07-15 12:59:51.186614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.273 [2024-07-15 12:59:51.186623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.273 [2024-07-15 12:59:51.186642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.273 qpair failed and we were unable to recover it. 
00:29:59.273 [2024-07-15 12:59:51.196350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.273 [2024-07-15 12:59:51.196445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.273 [2024-07-15 12:59:51.196464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.273 [2024-07-15 12:59:51.196473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.273 [2024-07-15 12:59:51.196481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.273 [2024-07-15 12:59:51.196500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.273 qpair failed and we were unable to recover it. 
00:29:59.273 [2024-07-15 12:59:51.206264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.273 [2024-07-15 12:59:51.206382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.273 [2024-07-15 12:59:51.206400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.273 [2024-07-15 12:59:51.206410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.273 [2024-07-15 12:59:51.206419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.273 [2024-07-15 12:59:51.206439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.273 qpair failed and we were unable to recover it. 
00:29:59.532 [2024-07-15 12:59:51.216366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.532 [2024-07-15 12:59:51.216475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.532 [2024-07-15 12:59:51.216496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.532 [2024-07-15 12:59:51.216506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.532 [2024-07-15 12:59:51.216514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.532 [2024-07-15 12:59:51.216537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.532 qpair failed and we were unable to recover it. 
00:29:59.532 [2024-07-15 12:59:51.226623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.532 [2024-07-15 12:59:51.226763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.532 [2024-07-15 12:59:51.226784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.532 [2024-07-15 12:59:51.226793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.532 [2024-07-15 12:59:51.226802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.532 [2024-07-15 12:59:51.226821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.532 qpair failed and we were unable to recover it. 
00:29:59.532 [2024-07-15 12:59:51.236494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.532 [2024-07-15 12:59:51.236598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.532 [2024-07-15 12:59:51.236617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.532 [2024-07-15 12:59:51.236626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.532 [2024-07-15 12:59:51.236634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.532 [2024-07-15 12:59:51.236653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.532 qpair failed and we were unable to recover it. 
00:29:59.532 [2024-07-15 12:59:51.246478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.532 [2024-07-15 12:59:51.246592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.532 [2024-07-15 12:59:51.246611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.532 [2024-07-15 12:59:51.246620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.532 [2024-07-15 12:59:51.246629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.532 [2024-07-15 12:59:51.246648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.532 qpair failed and we were unable to recover it. 
00:29:59.532 [2024-07-15 12:59:51.256489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.532 [2024-07-15 12:59:51.256587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.532 [2024-07-15 12:59:51.256607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.532 [2024-07-15 12:59:51.256616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.532 [2024-07-15 12:59:51.256624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.532 [2024-07-15 12:59:51.256643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.532 qpair failed and we were unable to recover it. 
00:29:59.532 [2024-07-15 12:59:51.266728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.532 [2024-07-15 12:59:51.266876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.532 [2024-07-15 12:59:51.266900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.532 [2024-07-15 12:59:51.266910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.532 [2024-07-15 12:59:51.266918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.532 [2024-07-15 12:59:51.266937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.532 qpair failed and we were unable to recover it. 
00:29:59.532 [2024-07-15 12:59:51.276581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.532 [2024-07-15 12:59:51.276676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.532 [2024-07-15 12:59:51.276695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.532 [2024-07-15 12:59:51.276704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.532 [2024-07-15 12:59:51.276712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.532 [2024-07-15 12:59:51.276731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.532 qpair failed and we were unable to recover it. 
00:29:59.532 [2024-07-15 12:59:51.286601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.532 [2024-07-15 12:59:51.286717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.532 [2024-07-15 12:59:51.286744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.532 [2024-07-15 12:59:51.286753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.532 [2024-07-15 12:59:51.286762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.532 [2024-07-15 12:59:51.286780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.532 qpair failed and we were unable to recover it. 
00:29:59.532 [2024-07-15 12:59:51.296621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.532 [2024-07-15 12:59:51.296706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.533 [2024-07-15 12:59:51.296725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.533 [2024-07-15 12:59:51.296734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.533 [2024-07-15 12:59:51.296742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.533 [2024-07-15 12:59:51.296760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.533 qpair failed and we were unable to recover it. 
00:29:59.533 [2024-07-15 12:59:51.306869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.533 [2024-07-15 12:59:51.306986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.533 [2024-07-15 12:59:51.307005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.533 [2024-07-15 12:59:51.307014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.533 [2024-07-15 12:59:51.307027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.533 [2024-07-15 12:59:51.307046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.533 qpair failed and we were unable to recover it.
00:29:59.533 [2024-07-15 12:59:51.316727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.533 [2024-07-15 12:59:51.316823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.533 [2024-07-15 12:59:51.316842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.533 [2024-07-15 12:59:51.316851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.533 [2024-07-15 12:59:51.316860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.533 [2024-07-15 12:59:51.316878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.533 qpair failed and we were unable to recover it.
00:29:59.533 [2024-07-15 12:59:51.326771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.533 [2024-07-15 12:59:51.326877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.533 [2024-07-15 12:59:51.326896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.533 [2024-07-15 12:59:51.326906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.533 [2024-07-15 12:59:51.326914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.533 [2024-07-15 12:59:51.326933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.533 qpair failed and we were unable to recover it.
00:29:59.533 [2024-07-15 12:59:51.336755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.533 [2024-07-15 12:59:51.336847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.533 [2024-07-15 12:59:51.336867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.533 [2024-07-15 12:59:51.336876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.533 [2024-07-15 12:59:51.336884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.533 [2024-07-15 12:59:51.336904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.533 qpair failed and we were unable to recover it.
00:29:59.533 [2024-07-15 12:59:51.347023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.533 [2024-07-15 12:59:51.347142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.533 [2024-07-15 12:59:51.347161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.533 [2024-07-15 12:59:51.347171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.533 [2024-07-15 12:59:51.347179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.533 [2024-07-15 12:59:51.347198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.533 qpair failed and we were unable to recover it.
00:29:59.533 [2024-07-15 12:59:51.356836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.533 [2024-07-15 12:59:51.356939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.533 [2024-07-15 12:59:51.356958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.533 [2024-07-15 12:59:51.356968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.533 [2024-07-15 12:59:51.356977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.533 [2024-07-15 12:59:51.356995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.533 qpair failed and we were unable to recover it.
00:29:59.533 [2024-07-15 12:59:51.366876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.533 [2024-07-15 12:59:51.366965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.533 [2024-07-15 12:59:51.366984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.533 [2024-07-15 12:59:51.366993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.533 [2024-07-15 12:59:51.367001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.533 [2024-07-15 12:59:51.367020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.533 qpair failed and we were unable to recover it.
00:29:59.533 [2024-07-15 12:59:51.376944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.533 [2024-07-15 12:59:51.377029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.533 [2024-07-15 12:59:51.377050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.533 [2024-07-15 12:59:51.377059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.533 [2024-07-15 12:59:51.377068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.533 [2024-07-15 12:59:51.377087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.533 qpair failed and we were unable to recover it.
00:29:59.533 [2024-07-15 12:59:51.387182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.533 [2024-07-15 12:59:51.387316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.533 [2024-07-15 12:59:51.387337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.533 [2024-07-15 12:59:51.387347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.533 [2024-07-15 12:59:51.387355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.533 [2024-07-15 12:59:51.387374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.533 qpair failed and we were unable to recover it.
00:29:59.533 [2024-07-15 12:59:51.396958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.533 [2024-07-15 12:59:51.397057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.533 [2024-07-15 12:59:51.397076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.533 [2024-07-15 12:59:51.397085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.533 [2024-07-15 12:59:51.397098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.533 [2024-07-15 12:59:51.397117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.533 qpair failed and we were unable to recover it.
00:29:59.533 [2024-07-15 12:59:51.406983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.533 [2024-07-15 12:59:51.407076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.533 [2024-07-15 12:59:51.407095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.533 [2024-07-15 12:59:51.407105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.533 [2024-07-15 12:59:51.407113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.533 [2024-07-15 12:59:51.407132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.533 qpair failed and we were unable to recover it.
00:29:59.533 [2024-07-15 12:59:51.417023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.533 [2024-07-15 12:59:51.417107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.533 [2024-07-15 12:59:51.417126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.533 [2024-07-15 12:59:51.417136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.533 [2024-07-15 12:59:51.417144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.533 [2024-07-15 12:59:51.417162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.533 qpair failed and we were unable to recover it.
00:29:59.533 [2024-07-15 12:59:51.427271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.533 [2024-07-15 12:59:51.427389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.533 [2024-07-15 12:59:51.427408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.533 [2024-07-15 12:59:51.427417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.533 [2024-07-15 12:59:51.427426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.533 [2024-07-15 12:59:51.427445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.533 qpair failed and we were unable to recover it.
00:29:59.533 [2024-07-15 12:59:51.437108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.533 [2024-07-15 12:59:51.437212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.533 [2024-07-15 12:59:51.437231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.533 [2024-07-15 12:59:51.437241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.533 [2024-07-15 12:59:51.437249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.534 [2024-07-15 12:59:51.437275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.534 qpair failed and we were unable to recover it.
00:29:59.534 [2024-07-15 12:59:51.447090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.534 [2024-07-15 12:59:51.447213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.534 [2024-07-15 12:59:51.447234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.534 [2024-07-15 12:59:51.447244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.534 [2024-07-15 12:59:51.447253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.534 [2024-07-15 12:59:51.447278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.534 qpair failed and we were unable to recover it.
00:29:59.534 [2024-07-15 12:59:51.457143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.534 [2024-07-15 12:59:51.457229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.534 [2024-07-15 12:59:51.457248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.534 [2024-07-15 12:59:51.457266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.534 [2024-07-15 12:59:51.457275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.534 [2024-07-15 12:59:51.457294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.534 qpair failed and we were unable to recover it.
00:29:59.534 [2024-07-15 12:59:51.467458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.534 [2024-07-15 12:59:51.467575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.534 [2024-07-15 12:59:51.467594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.534 [2024-07-15 12:59:51.467604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.534 [2024-07-15 12:59:51.467613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.534 [2024-07-15 12:59:51.467632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.534 qpair failed and we were unable to recover it.
00:29:59.794 [2024-07-15 12:59:51.477243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.794 [2024-07-15 12:59:51.477352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.794 [2024-07-15 12:59:51.477372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.794 [2024-07-15 12:59:51.477382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.794 [2024-07-15 12:59:51.477390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.794 [2024-07-15 12:59:51.477409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.794 qpair failed and we were unable to recover it.
00:29:59.794 [2024-07-15 12:59:51.487273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.794 [2024-07-15 12:59:51.487360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.794 [2024-07-15 12:59:51.487380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.794 [2024-07-15 12:59:51.487389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.794 [2024-07-15 12:59:51.487402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.794 [2024-07-15 12:59:51.487421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.794 qpair failed and we were unable to recover it.
00:29:59.794 [2024-07-15 12:59:51.497269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.794 [2024-07-15 12:59:51.497364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.794 [2024-07-15 12:59:51.497383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.794 [2024-07-15 12:59:51.497393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.794 [2024-07-15 12:59:51.497402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.794 [2024-07-15 12:59:51.497420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.794 qpair failed and we were unable to recover it.
00:29:59.794 [2024-07-15 12:59:51.507558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.794 [2024-07-15 12:59:51.507679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.794 [2024-07-15 12:59:51.507699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.794 [2024-07-15 12:59:51.507709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.794 [2024-07-15 12:59:51.507717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.794 [2024-07-15 12:59:51.507737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.794 qpair failed and we were unable to recover it.
00:29:59.794 [2024-07-15 12:59:51.517305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.794 [2024-07-15 12:59:51.517404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.794 [2024-07-15 12:59:51.517424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.794 [2024-07-15 12:59:51.517433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.794 [2024-07-15 12:59:51.517442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.794 [2024-07-15 12:59:51.517461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.794 qpair failed and we were unable to recover it.
00:29:59.794 [2024-07-15 12:59:51.527481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.794 [2024-07-15 12:59:51.527583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.794 [2024-07-15 12:59:51.527603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.794 [2024-07-15 12:59:51.527612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.794 [2024-07-15 12:59:51.527621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.794 [2024-07-15 12:59:51.527640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.794 qpair failed and we were unable to recover it.
00:29:59.794 [2024-07-15 12:59:51.537428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.794 [2024-07-15 12:59:51.537513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.794 [2024-07-15 12:59:51.537533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.794 [2024-07-15 12:59:51.537542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.794 [2024-07-15 12:59:51.537551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.794 [2024-07-15 12:59:51.537569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.794 qpair failed and we were unable to recover it.
00:29:59.794 [2024-07-15 12:59:51.547626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.794 [2024-07-15 12:59:51.547742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.794 [2024-07-15 12:59:51.547764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.794 [2024-07-15 12:59:51.547775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.794 [2024-07-15 12:59:51.547784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.794 [2024-07-15 12:59:51.547804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.794 qpair failed and we were unable to recover it.
00:29:59.794 [2024-07-15 12:59:51.557549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.794 [2024-07-15 12:59:51.557644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.794 [2024-07-15 12:59:51.557665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.794 [2024-07-15 12:59:51.557674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.794 [2024-07-15 12:59:51.557682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.794 [2024-07-15 12:59:51.557701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.794 qpair failed and we were unable to recover it.
00:29:59.794 [2024-07-15 12:59:51.567502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.794 [2024-07-15 12:59:51.567594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.794 [2024-07-15 12:59:51.567614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.795 [2024-07-15 12:59:51.567624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.795 [2024-07-15 12:59:51.567632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.795 [2024-07-15 12:59:51.567651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.795 qpair failed and we were unable to recover it.
00:29:59.795 [2024-07-15 12:59:51.577599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.795 [2024-07-15 12:59:51.577683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.795 [2024-07-15 12:59:51.577704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.795 [2024-07-15 12:59:51.577717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.795 [2024-07-15 12:59:51.577726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.795 [2024-07-15 12:59:51.577744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.795 qpair failed and we were unable to recover it.
00:29:59.795 [2024-07-15 12:59:51.587751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.795 [2024-07-15 12:59:51.587868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.795 [2024-07-15 12:59:51.587888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.795 [2024-07-15 12:59:51.587897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.795 [2024-07-15 12:59:51.587906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.795 [2024-07-15 12:59:51.587926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.795 qpair failed and we were unable to recover it.
00:29:59.795 [2024-07-15 12:59:51.597657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.795 [2024-07-15 12:59:51.597754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.795 [2024-07-15 12:59:51.597773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.795 [2024-07-15 12:59:51.597783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.795 [2024-07-15 12:59:51.597791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.795 [2024-07-15 12:59:51.597810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.795 qpair failed and we were unable to recover it.
00:29:59.795 [2024-07-15 12:59:51.607658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.795 [2024-07-15 12:59:51.607745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.795 [2024-07-15 12:59:51.607764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.795 [2024-07-15 12:59:51.607774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.795 [2024-07-15 12:59:51.607782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.795 [2024-07-15 12:59:51.607801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.795 qpair failed and we were unable to recover it.
00:29:59.795 [2024-07-15 12:59:51.617637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.795 [2024-07-15 12:59:51.617731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.795 [2024-07-15 12:59:51.617750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.795 [2024-07-15 12:59:51.617760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.795 [2024-07-15 12:59:51.617768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.795 [2024-07-15 12:59:51.617787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.795 qpair failed and we were unable to recover it.
00:29:59.795 [2024-07-15 12:59:51.627932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.795 [2024-07-15 12:59:51.628044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.795 [2024-07-15 12:59:51.628064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.795 [2024-07-15 12:59:51.628073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.795 [2024-07-15 12:59:51.628082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.795 [2024-07-15 12:59:51.628100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.795 qpair failed and we were unable to recover it.
00:29:59.795 [2024-07-15 12:59:51.637772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.795 [2024-07-15 12:59:51.637868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.795 [2024-07-15 12:59:51.637888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.795 [2024-07-15 12:59:51.637897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.795 [2024-07-15 12:59:51.637905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.795 [2024-07-15 12:59:51.637924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.795 qpair failed and we were unable to recover it.
00:29:59.795 [2024-07-15 12:59:51.647820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.795 [2024-07-15 12:59:51.647913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.795 [2024-07-15 12:59:51.647932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.795 [2024-07-15 12:59:51.647942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.795 [2024-07-15 12:59:51.647950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.795 [2024-07-15 12:59:51.647969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.795 qpair failed and we were unable to recover it.
00:29:59.795 [2024-07-15 12:59:51.657833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.795 [2024-07-15 12:59:51.657918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.795 [2024-07-15 12:59:51.657938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.795 [2024-07-15 12:59:51.657947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.795 [2024-07-15 12:59:51.657955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:29:59.795 [2024-07-15 12:59:51.657974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:59.795 qpair failed and we were unable to recover it.
00:29:59.795 [2024-07-15 12:59:51.668069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.795 [2024-07-15 12:59:51.668188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.795 [2024-07-15 12:59:51.668207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.795 [2024-07-15 12:59:51.668221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.795 [2024-07-15 12:59:51.668229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.795 [2024-07-15 12:59:51.668248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.795 qpair failed and we were unable to recover it. 
00:29:59.795 [2024-07-15 12:59:51.677913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.795 [2024-07-15 12:59:51.678038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.795 [2024-07-15 12:59:51.678058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.795 [2024-07-15 12:59:51.678067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.795 [2024-07-15 12:59:51.678076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.795 [2024-07-15 12:59:51.678094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.795 qpair failed and we were unable to recover it. 
00:29:59.795 [2024-07-15 12:59:51.687931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.795 [2024-07-15 12:59:51.688026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.795 [2024-07-15 12:59:51.688045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.795 [2024-07-15 12:59:51.688054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.795 [2024-07-15 12:59:51.688063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.795 [2024-07-15 12:59:51.688082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.795 qpair failed and we were unable to recover it. 
00:29:59.795 [2024-07-15 12:59:51.697997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.796 [2024-07-15 12:59:51.698091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.796 [2024-07-15 12:59:51.698111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.796 [2024-07-15 12:59:51.698120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.796 [2024-07-15 12:59:51.698128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.796 [2024-07-15 12:59:51.698147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.796 qpair failed and we were unable to recover it. 
00:29:59.796 [2024-07-15 12:59:51.708243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.796 [2024-07-15 12:59:51.708373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.796 [2024-07-15 12:59:51.708393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.796 [2024-07-15 12:59:51.708403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.796 [2024-07-15 12:59:51.708412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.796 [2024-07-15 12:59:51.708431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.796 qpair failed and we were unable to recover it. 
00:29:59.796 [2024-07-15 12:59:51.718113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.796 [2024-07-15 12:59:51.718260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.796 [2024-07-15 12:59:51.718281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.796 [2024-07-15 12:59:51.718290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.796 [2024-07-15 12:59:51.718299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.796 [2024-07-15 12:59:51.718318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.796 qpair failed and we were unable to recover it. 
00:29:59.796 [2024-07-15 12:59:51.728082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.796 [2024-07-15 12:59:51.728175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.796 [2024-07-15 12:59:51.728194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.796 [2024-07-15 12:59:51.728203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.796 [2024-07-15 12:59:51.728211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:29:59.796 [2024-07-15 12:59:51.728230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:59.796 qpair failed and we were unable to recover it. 
00:30:00.056 [2024-07-15 12:59:51.738074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.056 [2024-07-15 12:59:51.738163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.056 [2024-07-15 12:59:51.738182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.056 [2024-07-15 12:59:51.738192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.056 [2024-07-15 12:59:51.738200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.056 [2024-07-15 12:59:51.738219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.056 qpair failed and we were unable to recover it. 
00:30:00.056 [2024-07-15 12:59:51.748351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.056 [2024-07-15 12:59:51.748506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.056 [2024-07-15 12:59:51.748528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.056 [2024-07-15 12:59:51.748538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.056 [2024-07-15 12:59:51.748547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.056 [2024-07-15 12:59:51.748567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.056 qpair failed and we were unable to recover it. 
00:30:00.056 [2024-07-15 12:59:51.758204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.057 [2024-07-15 12:59:51.758317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.057 [2024-07-15 12:59:51.758341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.057 [2024-07-15 12:59:51.758352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.057 [2024-07-15 12:59:51.758360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.057 [2024-07-15 12:59:51.758380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.057 qpair failed and we were unable to recover it. 
00:30:00.057 [2024-07-15 12:59:51.768207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.057 [2024-07-15 12:59:51.768315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.057 [2024-07-15 12:59:51.768335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.057 [2024-07-15 12:59:51.768346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.057 [2024-07-15 12:59:51.768355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.057 [2024-07-15 12:59:51.768375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.057 qpair failed and we were unable to recover it. 
00:30:00.057 [2024-07-15 12:59:51.778307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.057 [2024-07-15 12:59:51.778436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.057 [2024-07-15 12:59:51.778456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.057 [2024-07-15 12:59:51.778466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.057 [2024-07-15 12:59:51.778475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.057 [2024-07-15 12:59:51.778494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.057 qpair failed and we were unable to recover it. 
00:30:00.057 [2024-07-15 12:59:51.788416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.057 [2024-07-15 12:59:51.788577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.057 [2024-07-15 12:59:51.788598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.057 [2024-07-15 12:59:51.788607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.057 [2024-07-15 12:59:51.788616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.057 [2024-07-15 12:59:51.788634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.057 qpair failed and we were unable to recover it. 
00:30:00.057 [2024-07-15 12:59:51.798262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.057 [2024-07-15 12:59:51.798380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.057 [2024-07-15 12:59:51.798399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.057 [2024-07-15 12:59:51.798409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.057 [2024-07-15 12:59:51.798417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.057 [2024-07-15 12:59:51.798436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.057 qpair failed and we were unable to recover it. 
00:30:00.057 [2024-07-15 12:59:51.808328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.057 [2024-07-15 12:59:51.808420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.057 [2024-07-15 12:59:51.808440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.057 [2024-07-15 12:59:51.808449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.057 [2024-07-15 12:59:51.808458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.057 [2024-07-15 12:59:51.808478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.057 qpair failed and we were unable to recover it. 
00:30:00.057 [2024-07-15 12:59:51.818371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.057 [2024-07-15 12:59:51.818473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.057 [2024-07-15 12:59:51.818493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.057 [2024-07-15 12:59:51.818503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.057 [2024-07-15 12:59:51.818511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.057 [2024-07-15 12:59:51.818530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.057 qpair failed and we were unable to recover it. 
00:30:00.057 [2024-07-15 12:59:51.828614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.057 [2024-07-15 12:59:51.828801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.057 [2024-07-15 12:59:51.828822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.057 [2024-07-15 12:59:51.828832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.057 [2024-07-15 12:59:51.828840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.057 [2024-07-15 12:59:51.828861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.057 qpair failed and we were unable to recover it. 
00:30:00.057 [2024-07-15 12:59:51.838476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.057 [2024-07-15 12:59:51.838578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.057 [2024-07-15 12:59:51.838598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.057 [2024-07-15 12:59:51.838607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.057 [2024-07-15 12:59:51.838616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.057 [2024-07-15 12:59:51.838635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.057 qpair failed and we were unable to recover it. 
00:30:00.057 [2024-07-15 12:59:51.848478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.057 [2024-07-15 12:59:51.848562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.057 [2024-07-15 12:59:51.848585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.057 [2024-07-15 12:59:51.848595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.057 [2024-07-15 12:59:51.848603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.057 [2024-07-15 12:59:51.848621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.057 qpair failed and we were unable to recover it. 
00:30:00.057 [2024-07-15 12:59:51.858536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.057 [2024-07-15 12:59:51.858630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.057 [2024-07-15 12:59:51.858650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.057 [2024-07-15 12:59:51.858659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.057 [2024-07-15 12:59:51.858668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.057 [2024-07-15 12:59:51.858687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.057 qpair failed and we were unable to recover it. 
00:30:00.057 [2024-07-15 12:59:51.868792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.057 [2024-07-15 12:59:51.868923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.057 [2024-07-15 12:59:51.868944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.057 [2024-07-15 12:59:51.868954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.057 [2024-07-15 12:59:51.868963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.057 [2024-07-15 12:59:51.868981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.057 qpair failed and we were unable to recover it. 
00:30:00.057 [2024-07-15 12:59:51.878531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.057 [2024-07-15 12:59:51.878627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.057 [2024-07-15 12:59:51.878647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.057 [2024-07-15 12:59:51.878656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.057 [2024-07-15 12:59:51.878664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.057 [2024-07-15 12:59:51.878683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.057 qpair failed and we were unable to recover it. 
00:30:00.057 [2024-07-15 12:59:51.888617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.058 [2024-07-15 12:59:51.888725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.058 [2024-07-15 12:59:51.888745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.058 [2024-07-15 12:59:51.888755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.058 [2024-07-15 12:59:51.888764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.058 [2024-07-15 12:59:51.888787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.058 qpair failed and we were unable to recover it. 
00:30:00.058 [2024-07-15 12:59:51.898585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.058 [2024-07-15 12:59:51.898699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.058 [2024-07-15 12:59:51.898718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.058 [2024-07-15 12:59:51.898728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.058 [2024-07-15 12:59:51.898737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.058 [2024-07-15 12:59:51.898756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.058 qpair failed and we were unable to recover it. 
00:30:00.058 [2024-07-15 12:59:51.908827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.058 [2024-07-15 12:59:51.908953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.058 [2024-07-15 12:59:51.908974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.058 [2024-07-15 12:59:51.908984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.058 [2024-07-15 12:59:51.908993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.058 [2024-07-15 12:59:51.909012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.058 qpair failed and we were unable to recover it. 
00:30:00.058 [2024-07-15 12:59:51.918746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.058 [2024-07-15 12:59:51.918848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.058 [2024-07-15 12:59:51.918868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.058 [2024-07-15 12:59:51.918877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.058 [2024-07-15 12:59:51.918886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.058 [2024-07-15 12:59:51.918905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.058 qpair failed and we were unable to recover it. 
00:30:00.058 [2024-07-15 12:59:51.928766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.058 [2024-07-15 12:59:51.928860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.058 [2024-07-15 12:59:51.928879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.058 [2024-07-15 12:59:51.928889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.058 [2024-07-15 12:59:51.928897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.058 [2024-07-15 12:59:51.928916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.058 qpair failed and we were unable to recover it. 
00:30:00.058 [2024-07-15 12:59:51.938815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.058 [2024-07-15 12:59:51.938908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.058 [2024-07-15 12:59:51.938931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.058 [2024-07-15 12:59:51.938940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.058 [2024-07-15 12:59:51.938948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.058 [2024-07-15 12:59:51.938967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.058 qpair failed and we were unable to recover it. 
00:30:00.058 [2024-07-15 12:59:51.949051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.058 [2024-07-15 12:59:51.949164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.058 [2024-07-15 12:59:51.949185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.058 [2024-07-15 12:59:51.949194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.058 [2024-07-15 12:59:51.949203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.058 [2024-07-15 12:59:51.949222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.058 qpair failed and we were unable to recover it. 
00:30:00.058 [2024-07-15 12:59:51.958872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.058 [2024-07-15 12:59:51.958995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.058 [2024-07-15 12:59:51.959014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.058 [2024-07-15 12:59:51.959024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.058 [2024-07-15 12:59:51.959032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.058 [2024-07-15 12:59:51.959053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.058 qpair failed and we were unable to recover it. 
00:30:00.058 [2024-07-15 12:59:51.968868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.058 [2024-07-15 12:59:51.968961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.058 [2024-07-15 12:59:51.968980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.058 [2024-07-15 12:59:51.968989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.058 [2024-07-15 12:59:51.968998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.058 [2024-07-15 12:59:51.969016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.058 qpair failed and we were unable to recover it. 
00:30:00.058 [2024-07-15 12:59:51.978940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.058 [2024-07-15 12:59:51.979036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.058 [2024-07-15 12:59:51.979055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.058 [2024-07-15 12:59:51.979064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.058 [2024-07-15 12:59:51.979073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.058 [2024-07-15 12:59:51.979095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.058 qpair failed and we were unable to recover it. 
00:30:00.058 [2024-07-15 12:59:51.989164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.058 [2024-07-15 12:59:51.989301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.058 [2024-07-15 12:59:51.989328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.058 [2024-07-15 12:59:51.989337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.058 [2024-07-15 12:59:51.989346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.058 [2024-07-15 12:59:51.989365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.058 qpair failed and we were unable to recover it. 
00:30:00.319 [2024-07-15 12:59:51.998995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.319 [2024-07-15 12:59:51.999117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.319 [2024-07-15 12:59:51.999144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.319 [2024-07-15 12:59:51.999154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.319 [2024-07-15 12:59:51.999163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.319 [2024-07-15 12:59:51.999183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.319 qpair failed and we were unable to recover it. 
00:30:00.319 [2024-07-15 12:59:52.009046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.319 [2024-07-15 12:59:52.009137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.319 [2024-07-15 12:59:52.009157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.319 [2024-07-15 12:59:52.009167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.319 [2024-07-15 12:59:52.009175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.319 [2024-07-15 12:59:52.009195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.319 qpair failed and we were unable to recover it. 
00:30:00.319 [2024-07-15 12:59:52.019059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.319 [2024-07-15 12:59:52.019163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.319 [2024-07-15 12:59:52.019182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.319 [2024-07-15 12:59:52.019192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.319 [2024-07-15 12:59:52.019200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.319 [2024-07-15 12:59:52.019219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.319 qpair failed and we were unable to recover it. 
00:30:00.319 [2024-07-15 12:59:52.029264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.319 [2024-07-15 12:59:52.029378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.319 [2024-07-15 12:59:52.029401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.319 [2024-07-15 12:59:52.029411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.319 [2024-07-15 12:59:52.029419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.319 [2024-07-15 12:59:52.029440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.319 qpair failed and we were unable to recover it. 
00:30:00.319 [2024-07-15 12:59:52.039036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.319 [2024-07-15 12:59:52.039125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.319 [2024-07-15 12:59:52.039145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.319 [2024-07-15 12:59:52.039154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.319 [2024-07-15 12:59:52.039162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.319 [2024-07-15 12:59:52.039181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.319 qpair failed and we were unable to recover it. 
00:30:00.319 [2024-07-15 12:59:52.049134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.319 [2024-07-15 12:59:52.049218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.319 [2024-07-15 12:59:52.049237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.320 [2024-07-15 12:59:52.049247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.320 [2024-07-15 12:59:52.049263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.320 [2024-07-15 12:59:52.049282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.320 qpair failed and we were unable to recover it. 
00:30:00.320 [2024-07-15 12:59:52.059132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.320 [2024-07-15 12:59:52.059241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.320 [2024-07-15 12:59:52.059266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.320 [2024-07-15 12:59:52.059275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.320 [2024-07-15 12:59:52.059284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.320 [2024-07-15 12:59:52.059303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.320 qpair failed and we were unable to recover it. 
00:30:00.320 [2024-07-15 12:59:52.069378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.320 [2024-07-15 12:59:52.069517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.320 [2024-07-15 12:59:52.069538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.320 [2024-07-15 12:59:52.069548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.320 [2024-07-15 12:59:52.069560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.320 [2024-07-15 12:59:52.069580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.320 qpair failed and we were unable to recover it. 
00:30:00.320 [2024-07-15 12:59:52.079275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.320 [2024-07-15 12:59:52.079374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.320 [2024-07-15 12:59:52.079394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.320 [2024-07-15 12:59:52.079404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.320 [2024-07-15 12:59:52.079412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.320 [2024-07-15 12:59:52.079431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.320 qpair failed and we were unable to recover it. 
00:30:00.320 [2024-07-15 12:59:52.089294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.320 [2024-07-15 12:59:52.089432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.320 [2024-07-15 12:59:52.089453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.320 [2024-07-15 12:59:52.089462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.320 [2024-07-15 12:59:52.089471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.320 [2024-07-15 12:59:52.089490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.320 qpair failed and we were unable to recover it. 
00:30:00.320 [2024-07-15 12:59:52.099233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.320 [2024-07-15 12:59:52.099358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.320 [2024-07-15 12:59:52.099385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.320 [2024-07-15 12:59:52.099395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.320 [2024-07-15 12:59:52.099404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.320 [2024-07-15 12:59:52.099423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.320 qpair failed and we were unable to recover it. 
00:30:00.320 [2024-07-15 12:59:52.109554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.320 [2024-07-15 12:59:52.109671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.320 [2024-07-15 12:59:52.109690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.320 [2024-07-15 12:59:52.109700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.320 [2024-07-15 12:59:52.109708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.320 [2024-07-15 12:59:52.109728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.320 qpair failed and we were unable to recover it. 
00:30:00.320 [2024-07-15 12:59:52.119435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.320 [2024-07-15 12:59:52.119548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.320 [2024-07-15 12:59:52.119568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.320 [2024-07-15 12:59:52.119578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.320 [2024-07-15 12:59:52.119586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.320 [2024-07-15 12:59:52.119606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.320 qpair failed and we were unable to recover it. 
00:30:00.320 [2024-07-15 12:59:52.129374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.320 [2024-07-15 12:59:52.129495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.320 [2024-07-15 12:59:52.129517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.320 [2024-07-15 12:59:52.129526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.320 [2024-07-15 12:59:52.129535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.320 [2024-07-15 12:59:52.129555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.320 qpair failed and we were unable to recover it. 
00:30:00.320 [2024-07-15 12:59:52.139459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.320 [2024-07-15 12:59:52.139554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.320 [2024-07-15 12:59:52.139574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.320 [2024-07-15 12:59:52.139583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.320 [2024-07-15 12:59:52.139591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.320 [2024-07-15 12:59:52.139610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.320 qpair failed and we were unable to recover it. 
00:30:00.320 [2024-07-15 12:59:52.149677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.320 [2024-07-15 12:59:52.149795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.320 [2024-07-15 12:59:52.149814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.320 [2024-07-15 12:59:52.149823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.320 [2024-07-15 12:59:52.149832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.320 [2024-07-15 12:59:52.149850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.320 qpair failed and we were unable to recover it. 
00:30:00.320 [2024-07-15 12:59:52.159511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.320 [2024-07-15 12:59:52.159670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.320 [2024-07-15 12:59:52.159690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.320 [2024-07-15 12:59:52.159700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.320 [2024-07-15 12:59:52.159712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.320 [2024-07-15 12:59:52.159732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.320 qpair failed and we were unable to recover it. 
00:30:00.320 [2024-07-15 12:59:52.169499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.320 [2024-07-15 12:59:52.169607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.320 [2024-07-15 12:59:52.169626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.320 [2024-07-15 12:59:52.169636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.320 [2024-07-15 12:59:52.169645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.320 [2024-07-15 12:59:52.169663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.320 qpair failed and we were unable to recover it. 
00:30:00.320 [2024-07-15 12:59:52.179634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.320 [2024-07-15 12:59:52.179733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.320 [2024-07-15 12:59:52.179752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.320 [2024-07-15 12:59:52.179761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.320 [2024-07-15 12:59:52.179770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.320 [2024-07-15 12:59:52.179788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.320 qpair failed and we were unable to recover it. 
00:30:00.320 [2024-07-15 12:59:52.189796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.320 [2024-07-15 12:59:52.189954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.320 [2024-07-15 12:59:52.189982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.320 [2024-07-15 12:59:52.189991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.320 [2024-07-15 12:59:52.189999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.320 [2024-07-15 12:59:52.190018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.320 qpair failed and we were unable to recover it. 
00:30:00.320 [2024-07-15 12:59:52.199658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.320 [2024-07-15 12:59:52.199767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.320 [2024-07-15 12:59:52.199786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.320 [2024-07-15 12:59:52.199795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.320 [2024-07-15 12:59:52.199805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.320 [2024-07-15 12:59:52.199824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.320 qpair failed and we were unable to recover it. 
00:30:00.320 [2024-07-15 12:59:52.209720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.320 [2024-07-15 12:59:52.209811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.320 [2024-07-15 12:59:52.209831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.320 [2024-07-15 12:59:52.209841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.320 [2024-07-15 12:59:52.209850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.320 [2024-07-15 12:59:52.209869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.320 qpair failed and we were unable to recover it. 
00:30:00.320 [2024-07-15 12:59:52.219716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.320 [2024-07-15 12:59:52.219842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.320 [2024-07-15 12:59:52.219864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.320 [2024-07-15 12:59:52.219873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.320 [2024-07-15 12:59:52.219882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.320 [2024-07-15 12:59:52.219900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.320 qpair failed and we were unable to recover it. 
00:30:00.320 [2024-07-15 12:59:52.229947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.320 [2024-07-15 12:59:52.230062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.320 [2024-07-15 12:59:52.230081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.320 [2024-07-15 12:59:52.230091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.320 [2024-07-15 12:59:52.230099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.320 [2024-07-15 12:59:52.230118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.320 qpair failed and we were unable to recover it. 
00:30:00.320 [2024-07-15 12:59:52.239764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.320 [2024-07-15 12:59:52.239860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.320 [2024-07-15 12:59:52.239879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.320 [2024-07-15 12:59:52.239888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.320 [2024-07-15 12:59:52.239897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.320 [2024-07-15 12:59:52.239916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.320 qpair failed and we were unable to recover it. 
00:30:00.320 [2024-07-15 12:59:52.249848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.320 [2024-07-15 12:59:52.249935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.320 [2024-07-15 12:59:52.249955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.320 [2024-07-15 12:59:52.249965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.320 [2024-07-15 12:59:52.249978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.320 [2024-07-15 12:59:52.249996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.320 qpair failed and we were unable to recover it. 
00:30:00.581 [2024-07-15 12:59:52.259832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.581 [2024-07-15 12:59:52.259923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.581 [2024-07-15 12:59:52.259942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.581 [2024-07-15 12:59:52.259951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.581 [2024-07-15 12:59:52.259960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.581 [2024-07-15 12:59:52.259978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.581 qpair failed and we were unable to recover it. 
00:30:00.581 [2024-07-15 12:59:52.270117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.581 [2024-07-15 12:59:52.270235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.581 [2024-07-15 12:59:52.270262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.581 [2024-07-15 12:59:52.270273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.581 [2024-07-15 12:59:52.270282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.581 [2024-07-15 12:59:52.270302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.581 qpair failed and we were unable to recover it.
00:30:00.581 [2024-07-15 12:59:52.279893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.581 [2024-07-15 12:59:52.280032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.581 [2024-07-15 12:59:52.280052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.581 [2024-07-15 12:59:52.280061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.581 [2024-07-15 12:59:52.280070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.581 [2024-07-15 12:59:52.280089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.581 qpair failed and we were unable to recover it.
00:30:00.581 [2024-07-15 12:59:52.289920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.581 [2024-07-15 12:59:52.290053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.581 [2024-07-15 12:59:52.290072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.581 [2024-07-15 12:59:52.290082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.581 [2024-07-15 12:59:52.290091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.581 [2024-07-15 12:59:52.290110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.581 qpair failed and we were unable to recover it.
00:30:00.581 [2024-07-15 12:59:52.300010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.581 [2024-07-15 12:59:52.300104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.581 [2024-07-15 12:59:52.300124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.581 [2024-07-15 12:59:52.300133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.581 [2024-07-15 12:59:52.300141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.581 [2024-07-15 12:59:52.300160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.581 qpair failed and we were unable to recover it.
00:30:00.581 [2024-07-15 12:59:52.310202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.581 [2024-07-15 12:59:52.310332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.581 [2024-07-15 12:59:52.310354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.581 [2024-07-15 12:59:52.310363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.581 [2024-07-15 12:59:52.310372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.581 [2024-07-15 12:59:52.310390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.581 qpair failed and we were unable to recover it.
00:30:00.581 [2024-07-15 12:59:52.320068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.581 [2024-07-15 12:59:52.320160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.581 [2024-07-15 12:59:52.320179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.581 [2024-07-15 12:59:52.320189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.581 [2024-07-15 12:59:52.320197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.581 [2024-07-15 12:59:52.320216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.581 qpair failed and we were unable to recover it.
00:30:00.581 [2024-07-15 12:59:52.330132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.581 [2024-07-15 12:59:52.330223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.581 [2024-07-15 12:59:52.330242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.581 [2024-07-15 12:59:52.330251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.581 [2024-07-15 12:59:52.330265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.581 [2024-07-15 12:59:52.330284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.581 qpair failed and we were unable to recover it.
00:30:00.581 [2024-07-15 12:59:52.340112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.581 [2024-07-15 12:59:52.340209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.581 [2024-07-15 12:59:52.340228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.581 [2024-07-15 12:59:52.340242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.581 [2024-07-15 12:59:52.340250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.581 [2024-07-15 12:59:52.340275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.581 qpair failed and we were unable to recover it.
00:30:00.581 [2024-07-15 12:59:52.350363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.581 [2024-07-15 12:59:52.350484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.581 [2024-07-15 12:59:52.350503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.581 [2024-07-15 12:59:52.350513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.581 [2024-07-15 12:59:52.350521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.581 [2024-07-15 12:59:52.350540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.581 qpair failed and we were unable to recover it.
00:30:00.581 [2024-07-15 12:59:52.360222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.581 [2024-07-15 12:59:52.360357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.581 [2024-07-15 12:59:52.360387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.581 [2024-07-15 12:59:52.360397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.581 [2024-07-15 12:59:52.360406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.581 [2024-07-15 12:59:52.360427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.581 qpair failed and we were unable to recover it.
00:30:00.581 [2024-07-15 12:59:52.370283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.581 [2024-07-15 12:59:52.370416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.581 [2024-07-15 12:59:52.370436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.581 [2024-07-15 12:59:52.370446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.581 [2024-07-15 12:59:52.370454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.581 [2024-07-15 12:59:52.370473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.581 qpair failed and we were unable to recover it.
00:30:00.581 [2024-07-15 12:59:52.380243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.581 [2024-07-15 12:59:52.380369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.581 [2024-07-15 12:59:52.380388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.581 [2024-07-15 12:59:52.380398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.581 [2024-07-15 12:59:52.380406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.581 [2024-07-15 12:59:52.380427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.581 qpair failed and we were unable to recover it.
00:30:00.581 [2024-07-15 12:59:52.390520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.581 [2024-07-15 12:59:52.390706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.581 [2024-07-15 12:59:52.390727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.581 [2024-07-15 12:59:52.390736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.581 [2024-07-15 12:59:52.390745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.581 [2024-07-15 12:59:52.390763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.581 qpair failed and we were unable to recover it.
00:30:00.581 [2024-07-15 12:59:52.400356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.581 [2024-07-15 12:59:52.400458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.581 [2024-07-15 12:59:52.400477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.581 [2024-07-15 12:59:52.400486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.581 [2024-07-15 12:59:52.400494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.581 [2024-07-15 12:59:52.400513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.581 qpair failed and we were unable to recover it.
00:30:00.581 [2024-07-15 12:59:52.410392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.581 [2024-07-15 12:59:52.410478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.581 [2024-07-15 12:59:52.410499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.581 [2024-07-15 12:59:52.410508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.582 [2024-07-15 12:59:52.410516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.582 [2024-07-15 12:59:52.410535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.582 qpair failed and we were unable to recover it.
00:30:00.582 [2024-07-15 12:59:52.420390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.582 [2024-07-15 12:59:52.420510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.582 [2024-07-15 12:59:52.420530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.582 [2024-07-15 12:59:52.420540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.582 [2024-07-15 12:59:52.420548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.582 [2024-07-15 12:59:52.420568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.582 qpair failed and we were unable to recover it.
00:30:00.582 [2024-07-15 12:59:52.430636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.582 [2024-07-15 12:59:52.430759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.582 [2024-07-15 12:59:52.430778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.582 [2024-07-15 12:59:52.430791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.582 [2024-07-15 12:59:52.430800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.582 [2024-07-15 12:59:52.430819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.582 qpair failed and we were unable to recover it.
00:30:00.582 [2024-07-15 12:59:52.440525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.582 [2024-07-15 12:59:52.440647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.582 [2024-07-15 12:59:52.440666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.582 [2024-07-15 12:59:52.440675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.582 [2024-07-15 12:59:52.440683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.582 [2024-07-15 12:59:52.440703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.582 qpair failed and we were unable to recover it.
00:30:00.582 [2024-07-15 12:59:52.450518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.582 [2024-07-15 12:59:52.450650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.582 [2024-07-15 12:59:52.450671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.582 [2024-07-15 12:59:52.450682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.582 [2024-07-15 12:59:52.450690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.582 [2024-07-15 12:59:52.450710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.582 qpair failed and we were unable to recover it.
00:30:00.582 [2024-07-15 12:59:52.460475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.582 [2024-07-15 12:59:52.460597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.582 [2024-07-15 12:59:52.460618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.582 [2024-07-15 12:59:52.460628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.582 [2024-07-15 12:59:52.460637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.582 [2024-07-15 12:59:52.460656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.582 qpair failed and we were unable to recover it.
00:30:00.582 [2024-07-15 12:59:52.470771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.582 [2024-07-15 12:59:52.470908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.582 [2024-07-15 12:59:52.470929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.582 [2024-07-15 12:59:52.470938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.582 [2024-07-15 12:59:52.470947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.582 [2024-07-15 12:59:52.470965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.582 qpair failed and we were unable to recover it.
00:30:00.582 [2024-07-15 12:59:52.480577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.582 [2024-07-15 12:59:52.480708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.582 [2024-07-15 12:59:52.480729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.582 [2024-07-15 12:59:52.480738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.582 [2024-07-15 12:59:52.480747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.582 [2024-07-15 12:59:52.480766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.582 qpair failed and we were unable to recover it.
00:30:00.582 [2024-07-15 12:59:52.490598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.582 [2024-07-15 12:59:52.490691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.582 [2024-07-15 12:59:52.490711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.582 [2024-07-15 12:59:52.490720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.582 [2024-07-15 12:59:52.490728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.582 [2024-07-15 12:59:52.490748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.582 qpair failed and we were unable to recover it.
00:30:00.582 [2024-07-15 12:59:52.500704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.582 [2024-07-15 12:59:52.500794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.582 [2024-07-15 12:59:52.500813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.582 [2024-07-15 12:59:52.500823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.582 [2024-07-15 12:59:52.500831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.582 [2024-07-15 12:59:52.500849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.582 qpair failed and we were unable to recover it.
00:30:00.582 [2024-07-15 12:59:52.510909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.582 [2024-07-15 12:59:52.511025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.582 [2024-07-15 12:59:52.511045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.582 [2024-07-15 12:59:52.511054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.582 [2024-07-15 12:59:52.511063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.582 [2024-07-15 12:59:52.511082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.582 qpair failed and we were unable to recover it.
00:30:00.842 [2024-07-15 12:59:52.520670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.842 [2024-07-15 12:59:52.520766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.842 [2024-07-15 12:59:52.520786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.842 [2024-07-15 12:59:52.520798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.842 [2024-07-15 12:59:52.520807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.842 [2024-07-15 12:59:52.520826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.842 qpair failed and we were unable to recover it.
00:30:00.842 [2024-07-15 12:59:52.530777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.842 [2024-07-15 12:59:52.530867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.842 [2024-07-15 12:59:52.530887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.842 [2024-07-15 12:59:52.530896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.842 [2024-07-15 12:59:52.530904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.842 [2024-07-15 12:59:52.530923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.842 qpair failed and we were unable to recover it.
00:30:00.842 [2024-07-15 12:59:52.540734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.842 [2024-07-15 12:59:52.540819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.842 [2024-07-15 12:59:52.540838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.842 [2024-07-15 12:59:52.540848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.842 [2024-07-15 12:59:52.540856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.842 [2024-07-15 12:59:52.540875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.842 qpair failed and we were unable to recover it.
00:30:00.842 [2024-07-15 12:59:52.551020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.842 [2024-07-15 12:59:52.551139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.842 [2024-07-15 12:59:52.551169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.842 [2024-07-15 12:59:52.551179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.842 [2024-07-15 12:59:52.551189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.842 [2024-07-15 12:59:52.551209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.842 qpair failed and we were unable to recover it.
00:30:00.842 [2024-07-15 12:59:52.560828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.842 [2024-07-15 12:59:52.560924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.842 [2024-07-15 12:59:52.560945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.842 [2024-07-15 12:59:52.560954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.842 [2024-07-15 12:59:52.560963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.842 [2024-07-15 12:59:52.560982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.842 qpair failed and we were unable to recover it.
00:30:00.842 [2024-07-15 12:59:52.570874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.842 [2024-07-15 12:59:52.570960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.842 [2024-07-15 12:59:52.570980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.842 [2024-07-15 12:59:52.570990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.842 [2024-07-15 12:59:52.570998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.842 [2024-07-15 12:59:52.571017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.842 qpair failed and we were unable to recover it.
00:30:00.842 [2024-07-15 12:59:52.580912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.842 [2024-07-15 12:59:52.581004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.842 [2024-07-15 12:59:52.581024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.842 [2024-07-15 12:59:52.581033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.842 [2024-07-15 12:59:52.581041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.842 [2024-07-15 12:59:52.581060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.842 qpair failed and we were unable to recover it.
00:30:00.842 [2024-07-15 12:59:52.591133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.842 [2024-07-15 12:59:52.591248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.842 [2024-07-15 12:59:52.591273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.842 [2024-07-15 12:59:52.591283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.842 [2024-07-15 12:59:52.591291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.842 [2024-07-15 12:59:52.591310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.842 qpair failed and we were unable to recover it.
00:30:00.842 [2024-07-15 12:59:52.601005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.842 [2024-07-15 12:59:52.601109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.842 [2024-07-15 12:59:52.601128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.842 [2024-07-15 12:59:52.601137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.842 [2024-07-15 12:59:52.601145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.842 [2024-07-15 12:59:52.601165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.842 qpair failed and we were unable to recover it.
00:30:00.842 [2024-07-15 12:59:52.610997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.842 [2024-07-15 12:59:52.611099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.842 [2024-07-15 12:59:52.611123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.842 [2024-07-15 12:59:52.611133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.842 [2024-07-15 12:59:52.611142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.842 [2024-07-15 12:59:52.611161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.842 qpair failed and we were unable to recover it.
00:30:00.842 [2024-07-15 12:59:52.621098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.842 [2024-07-15 12:59:52.621185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.842 [2024-07-15 12:59:52.621206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.842 [2024-07-15 12:59:52.621215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.842 [2024-07-15 12:59:52.621223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:00.842 [2024-07-15 12:59:52.621243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.842 qpair failed and we were unable to recover it.
00:30:00.842 [2024-07-15 12:59:52.631298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.842 [2024-07-15 12:59:52.631428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.842 [2024-07-15 12:59:52.631450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.842 [2024-07-15 12:59:52.631460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.842 [2024-07-15 12:59:52.631469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.842 [2024-07-15 12:59:52.631488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.842 qpair failed and we were unable to recover it. 
00:30:00.842 [2024-07-15 12:59:52.641100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.842 [2024-07-15 12:59:52.641344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.842 [2024-07-15 12:59:52.641366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.842 [2024-07-15 12:59:52.641376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.842 [2024-07-15 12:59:52.641385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.842 [2024-07-15 12:59:52.641405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.842 qpair failed and we were unable to recover it. 
00:30:00.842 [2024-07-15 12:59:52.651173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.842 [2024-07-15 12:59:52.651304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.842 [2024-07-15 12:59:52.651324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.842 [2024-07-15 12:59:52.651334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.842 [2024-07-15 12:59:52.651342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.842 [2024-07-15 12:59:52.651367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.842 qpair failed and we were unable to recover it. 
00:30:00.842 [2024-07-15 12:59:52.661178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.842 [2024-07-15 12:59:52.661264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.842 [2024-07-15 12:59:52.661283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.842 [2024-07-15 12:59:52.661292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.842 [2024-07-15 12:59:52.661300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.842 [2024-07-15 12:59:52.661319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.842 qpair failed and we were unable to recover it. 
00:30:00.842 [2024-07-15 12:59:52.671404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.842 [2024-07-15 12:59:52.671542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.842 [2024-07-15 12:59:52.671563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.842 [2024-07-15 12:59:52.671572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.842 [2024-07-15 12:59:52.671581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.842 [2024-07-15 12:59:52.671601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.842 qpair failed and we were unable to recover it. 
00:30:00.842 [2024-07-15 12:59:52.681219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.842 [2024-07-15 12:59:52.681319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.842 [2024-07-15 12:59:52.681338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.842 [2024-07-15 12:59:52.681347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.842 [2024-07-15 12:59:52.681356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.842 [2024-07-15 12:59:52.681376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.842 qpair failed and we were unable to recover it. 
00:30:00.842 [2024-07-15 12:59:52.691441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.842 [2024-07-15 12:59:52.691554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.842 [2024-07-15 12:59:52.691573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.842 [2024-07-15 12:59:52.691583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.842 [2024-07-15 12:59:52.691591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.842 [2024-07-15 12:59:52.691610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.842 qpair failed and we were unable to recover it. 
00:30:00.842 [2024-07-15 12:59:52.701400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.842 [2024-07-15 12:59:52.701494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.842 [2024-07-15 12:59:52.701517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.842 [2024-07-15 12:59:52.701527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.842 [2024-07-15 12:59:52.701536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.842 [2024-07-15 12:59:52.701554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.842 qpair failed and we were unable to recover it. 
00:30:00.842 [2024-07-15 12:59:52.711657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.843 [2024-07-15 12:59:52.711776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.843 [2024-07-15 12:59:52.711796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.843 [2024-07-15 12:59:52.711806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.843 [2024-07-15 12:59:52.711815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.843 [2024-07-15 12:59:52.711834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.843 qpair failed and we were unable to recover it. 
00:30:00.843 [2024-07-15 12:59:52.721415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.843 [2024-07-15 12:59:52.721508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.843 [2024-07-15 12:59:52.721528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.843 [2024-07-15 12:59:52.721537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.843 [2024-07-15 12:59:52.721546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.843 [2024-07-15 12:59:52.721565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.843 qpair failed and we were unable to recover it. 
00:30:00.843 [2024-07-15 12:59:52.731363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.843 [2024-07-15 12:59:52.731475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.843 [2024-07-15 12:59:52.731495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.843 [2024-07-15 12:59:52.731504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.843 [2024-07-15 12:59:52.731513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.843 [2024-07-15 12:59:52.731533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.843 qpair failed and we were unable to recover it. 
00:30:00.843 [2024-07-15 12:59:52.741456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.843 [2024-07-15 12:59:52.741574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.843 [2024-07-15 12:59:52.741599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.843 [2024-07-15 12:59:52.741609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.843 [2024-07-15 12:59:52.741617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.843 [2024-07-15 12:59:52.741641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.843 qpair failed and we were unable to recover it. 
00:30:00.843 [2024-07-15 12:59:52.751639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.843 [2024-07-15 12:59:52.751759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.843 [2024-07-15 12:59:52.751779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.843 [2024-07-15 12:59:52.751789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.843 [2024-07-15 12:59:52.751798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.843 [2024-07-15 12:59:52.751816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.843 qpair failed and we were unable to recover it. 
00:30:00.843 [2024-07-15 12:59:52.761536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.843 [2024-07-15 12:59:52.761626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.843 [2024-07-15 12:59:52.761645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.843 [2024-07-15 12:59:52.761654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.843 [2024-07-15 12:59:52.761663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.843 [2024-07-15 12:59:52.761682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.843 qpair failed and we were unable to recover it. 
00:30:00.843 [2024-07-15 12:59:52.771592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.843 [2024-07-15 12:59:52.771685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.843 [2024-07-15 12:59:52.771704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.843 [2024-07-15 12:59:52.771714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.843 [2024-07-15 12:59:52.771722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:00.843 [2024-07-15 12:59:52.771741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.843 qpair failed and we were unable to recover it. 
00:30:01.103 [2024-07-15 12:59:52.781543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.103 [2024-07-15 12:59:52.781628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.103 [2024-07-15 12:59:52.781648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.103 [2024-07-15 12:59:52.781658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.103 [2024-07-15 12:59:52.781666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.103 [2024-07-15 12:59:52.781684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.103 qpair failed and we were unable to recover it. 
00:30:01.103 [2024-07-15 12:59:52.791791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.103 [2024-07-15 12:59:52.791911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.103 [2024-07-15 12:59:52.791942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.103 [2024-07-15 12:59:52.791952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.103 [2024-07-15 12:59:52.791961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.103 [2024-07-15 12:59:52.791980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.103 qpair failed and we were unable to recover it. 
00:30:01.103 [2024-07-15 12:59:52.801563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.103 [2024-07-15 12:59:52.801688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.103 [2024-07-15 12:59:52.801709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.103 [2024-07-15 12:59:52.801718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.103 [2024-07-15 12:59:52.801727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.103 [2024-07-15 12:59:52.801745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.103 qpair failed and we were unable to recover it. 
00:30:01.103 [2024-07-15 12:59:52.811644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.103 [2024-07-15 12:59:52.811732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.103 [2024-07-15 12:59:52.811752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.103 [2024-07-15 12:59:52.811761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.103 [2024-07-15 12:59:52.811770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.103 [2024-07-15 12:59:52.811788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.103 qpair failed and we were unable to recover it. 
00:30:01.103 [2024-07-15 12:59:52.821681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.103 [2024-07-15 12:59:52.821768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.103 [2024-07-15 12:59:52.821788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.103 [2024-07-15 12:59:52.821797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.103 [2024-07-15 12:59:52.821806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.103 [2024-07-15 12:59:52.821824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.103 qpair failed and we were unable to recover it. 
00:30:01.103 [2024-07-15 12:59:52.831947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.103 [2024-07-15 12:59:52.832089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.103 [2024-07-15 12:59:52.832110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.103 [2024-07-15 12:59:52.832119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.103 [2024-07-15 12:59:52.832128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.103 [2024-07-15 12:59:52.832151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.103 qpair failed and we were unable to recover it. 
00:30:01.103 [2024-07-15 12:59:52.841727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.103 [2024-07-15 12:59:52.841827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.103 [2024-07-15 12:59:52.841847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.103 [2024-07-15 12:59:52.841856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.103 [2024-07-15 12:59:52.841864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.103 [2024-07-15 12:59:52.841883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.103 qpair failed and we were unable to recover it. 
00:30:01.103 [2024-07-15 12:59:52.851719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.103 [2024-07-15 12:59:52.851816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.103 [2024-07-15 12:59:52.851834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.103 [2024-07-15 12:59:52.851843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.103 [2024-07-15 12:59:52.851851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.103 [2024-07-15 12:59:52.851870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.103 qpair failed and we were unable to recover it. 
00:30:01.103 [2024-07-15 12:59:52.861801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.103 [2024-07-15 12:59:52.861921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.103 [2024-07-15 12:59:52.861942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.103 [2024-07-15 12:59:52.861951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.103 [2024-07-15 12:59:52.861960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.103 [2024-07-15 12:59:52.861979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.103 qpair failed and we were unable to recover it. 
00:30:01.103 [2024-07-15 12:59:52.872090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.103 [2024-07-15 12:59:52.872225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.103 [2024-07-15 12:59:52.872246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.103 [2024-07-15 12:59:52.872261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.103 [2024-07-15 12:59:52.872270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.103 [2024-07-15 12:59:52.872289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.103 qpair failed and we were unable to recover it. 
00:30:01.103 [2024-07-15 12:59:52.881890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.103 [2024-07-15 12:59:52.881983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.103 [2024-07-15 12:59:52.882006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.103 [2024-07-15 12:59:52.882016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.103 [2024-07-15 12:59:52.882024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.103 [2024-07-15 12:59:52.882044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.103 qpair failed and we were unable to recover it. 
00:30:01.103 [2024-07-15 12:59:52.891902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.103 [2024-07-15 12:59:52.891987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.103 [2024-07-15 12:59:52.892006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.103 [2024-07-15 12:59:52.892016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.103 [2024-07-15 12:59:52.892024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.103 [2024-07-15 12:59:52.892043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.103 qpair failed and we were unable to recover it. 
00:30:01.103 [2024-07-15 12:59:52.902000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.103 [2024-07-15 12:59:52.902085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.103 [2024-07-15 12:59:52.902104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.103 [2024-07-15 12:59:52.902114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.103 [2024-07-15 12:59:52.902122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.103 [2024-07-15 12:59:52.902141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.103 qpair failed and we were unable to recover it. 
00:30:01.103 [2024-07-15 12:59:52.912162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.103 [2024-07-15 12:59:52.912284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.103 [2024-07-15 12:59:52.912304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.103 [2024-07-15 12:59:52.912313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.103 [2024-07-15 12:59:52.912321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.103 [2024-07-15 12:59:52.912341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.103 qpair failed and we were unable to recover it.
00:30:01.103 [2024-07-15 12:59:52.922017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.104 [2024-07-15 12:59:52.922108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.104 [2024-07-15 12:59:52.922128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.104 [2024-07-15 12:59:52.922138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.104 [2024-07-15 12:59:52.922149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.104 [2024-07-15 12:59:52.922168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.104 qpair failed and we were unable to recover it.
00:30:01.104 [2024-07-15 12:59:52.932037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.104 [2024-07-15 12:59:52.932130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.104 [2024-07-15 12:59:52.932149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.104 [2024-07-15 12:59:52.932158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.104 [2024-07-15 12:59:52.932167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.104 [2024-07-15 12:59:52.932185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.104 qpair failed and we were unable to recover it.
00:30:01.104 [2024-07-15 12:59:52.942115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.104 [2024-07-15 12:59:52.942200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.104 [2024-07-15 12:59:52.942220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.104 [2024-07-15 12:59:52.942230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.104 [2024-07-15 12:59:52.942238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.104 [2024-07-15 12:59:52.942261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.104 qpair failed and we were unable to recover it.
00:30:01.104 [2024-07-15 12:59:52.952338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.104 [2024-07-15 12:59:52.952469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.104 [2024-07-15 12:59:52.952490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.104 [2024-07-15 12:59:52.952500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.104 [2024-07-15 12:59:52.952508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.104 [2024-07-15 12:59:52.952527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.104 qpair failed and we were unable to recover it.
00:30:01.104 [2024-07-15 12:59:52.962133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.104 [2024-07-15 12:59:52.962228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.104 [2024-07-15 12:59:52.962247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.104 [2024-07-15 12:59:52.962261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.104 [2024-07-15 12:59:52.962270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.104 [2024-07-15 12:59:52.962288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.104 qpair failed and we were unable to recover it.
00:30:01.104 [2024-07-15 12:59:52.972192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.104 [2024-07-15 12:59:52.972297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.104 [2024-07-15 12:59:52.972317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.104 [2024-07-15 12:59:52.972327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.104 [2024-07-15 12:59:52.972336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.104 [2024-07-15 12:59:52.972355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.104 qpair failed and we were unable to recover it.
00:30:01.104 [2024-07-15 12:59:52.982213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.104 [2024-07-15 12:59:52.982307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.104 [2024-07-15 12:59:52.982326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.104 [2024-07-15 12:59:52.982335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.104 [2024-07-15 12:59:52.982343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.104 [2024-07-15 12:59:52.982365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.104 qpair failed and we were unable to recover it.
00:30:01.104 [2024-07-15 12:59:52.992496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.104 [2024-07-15 12:59:52.992616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.104 [2024-07-15 12:59:52.992635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.104 [2024-07-15 12:59:52.992645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.104 [2024-07-15 12:59:52.992653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.104 [2024-07-15 12:59:52.992672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.104 qpair failed and we were unable to recover it.
00:30:01.104 [2024-07-15 12:59:53.002208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.104 [2024-07-15 12:59:53.002312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.104 [2024-07-15 12:59:53.002331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.104 [2024-07-15 12:59:53.002341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.104 [2024-07-15 12:59:53.002349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.104 [2024-07-15 12:59:53.002368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.104 qpair failed and we were unable to recover it.
00:30:01.104 [2024-07-15 12:59:53.012267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.104 [2024-07-15 12:59:53.012367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.104 [2024-07-15 12:59:53.012387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.104 [2024-07-15 12:59:53.012396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.104 [2024-07-15 12:59:53.012409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.104 [2024-07-15 12:59:53.012428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.104 qpair failed and we were unable to recover it.
00:30:01.104 [2024-07-15 12:59:53.022312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.104 [2024-07-15 12:59:53.022424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.104 [2024-07-15 12:59:53.022444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.104 [2024-07-15 12:59:53.022454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.104 [2024-07-15 12:59:53.022462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.104 [2024-07-15 12:59:53.022482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.104 qpair failed and we were unable to recover it.
00:30:01.104 [2024-07-15 12:59:53.032539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.104 [2024-07-15 12:59:53.032654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.104 [2024-07-15 12:59:53.032674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.104 [2024-07-15 12:59:53.032684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.104 [2024-07-15 12:59:53.032693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.104 [2024-07-15 12:59:53.032712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.104 qpair failed and we were unable to recover it.
00:30:01.365 [2024-07-15 12:59:53.042408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.365 [2024-07-15 12:59:53.042510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.365 [2024-07-15 12:59:53.042530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.365 [2024-07-15 12:59:53.042540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.365 [2024-07-15 12:59:53.042548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.365 [2024-07-15 12:59:53.042567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.365 qpair failed and we were unable to recover it.
00:30:01.365 [2024-07-15 12:59:53.052439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.365 [2024-07-15 12:59:53.052548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.365 [2024-07-15 12:59:53.052568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.365 [2024-07-15 12:59:53.052578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.365 [2024-07-15 12:59:53.052587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.365 [2024-07-15 12:59:53.052607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.365 qpair failed and we were unable to recover it.
00:30:01.365 [2024-07-15 12:59:53.062466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.365 [2024-07-15 12:59:53.062590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.365 [2024-07-15 12:59:53.062610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.365 [2024-07-15 12:59:53.062620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.365 [2024-07-15 12:59:53.062629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.365 [2024-07-15 12:59:53.062651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.365 qpair failed and we were unable to recover it.
00:30:01.365 [2024-07-15 12:59:53.072736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.365 [2024-07-15 12:59:53.072849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.365 [2024-07-15 12:59:53.072869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.365 [2024-07-15 12:59:53.072878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.365 [2024-07-15 12:59:53.072887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.365 [2024-07-15 12:59:53.072906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.365 qpair failed and we were unable to recover it.
00:30:01.365 [2024-07-15 12:59:53.082593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.365 [2024-07-15 12:59:53.082685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.365 [2024-07-15 12:59:53.082704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.365 [2024-07-15 12:59:53.082714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.365 [2024-07-15 12:59:53.082722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.365 [2024-07-15 12:59:53.082740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.365 qpair failed and we were unable to recover it.
00:30:01.365 [2024-07-15 12:59:53.092604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.365 [2024-07-15 12:59:53.092707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.365 [2024-07-15 12:59:53.092726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.365 [2024-07-15 12:59:53.092735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.365 [2024-07-15 12:59:53.092744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.365 [2024-07-15 12:59:53.092763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.365 qpair failed and we were unable to recover it.
00:30:01.365 [2024-07-15 12:59:53.102520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.365 [2024-07-15 12:59:53.102602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.365 [2024-07-15 12:59:53.102622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.365 [2024-07-15 12:59:53.102635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.365 [2024-07-15 12:59:53.102644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.365 [2024-07-15 12:59:53.102662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.365 qpair failed and we were unable to recover it.
00:30:01.365 [2024-07-15 12:59:53.112805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.365 [2024-07-15 12:59:53.112931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.365 [2024-07-15 12:59:53.112952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.366 [2024-07-15 12:59:53.112961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.366 [2024-07-15 12:59:53.112970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.366 [2024-07-15 12:59:53.112989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.366 qpair failed and we were unable to recover it.
00:30:01.366 [2024-07-15 12:59:53.122654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.366 [2024-07-15 12:59:53.122756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.366 [2024-07-15 12:59:53.122776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.366 [2024-07-15 12:59:53.122785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.366 [2024-07-15 12:59:53.122793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.366 [2024-07-15 12:59:53.122813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.366 qpair failed and we were unable to recover it.
00:30:01.366 [2024-07-15 12:59:53.132662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.366 [2024-07-15 12:59:53.132754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.366 [2024-07-15 12:59:53.132774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.366 [2024-07-15 12:59:53.132782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.366 [2024-07-15 12:59:53.132791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.366 [2024-07-15 12:59:53.132809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.366 qpair failed and we were unable to recover it.
00:30:01.366 [2024-07-15 12:59:53.142649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.366 [2024-07-15 12:59:53.142743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.366 [2024-07-15 12:59:53.142761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.366 [2024-07-15 12:59:53.142770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.366 [2024-07-15 12:59:53.142778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.366 [2024-07-15 12:59:53.142796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.366 qpair failed and we were unable to recover it.
00:30:01.366 [2024-07-15 12:59:53.152935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.366 [2024-07-15 12:59:53.153049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.366 [2024-07-15 12:59:53.153068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.366 [2024-07-15 12:59:53.153078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.366 [2024-07-15 12:59:53.153087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.366 [2024-07-15 12:59:53.153105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.366 qpair failed and we were unable to recover it.
00:30:01.366 [2024-07-15 12:59:53.162770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.366 [2024-07-15 12:59:53.162891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.366 [2024-07-15 12:59:53.162918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.366 [2024-07-15 12:59:53.162928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.366 [2024-07-15 12:59:53.162936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.366 [2024-07-15 12:59:53.162955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.366 qpair failed and we were unable to recover it.
00:30:01.366 [2024-07-15 12:59:53.172771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.366 [2024-07-15 12:59:53.172854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.366 [2024-07-15 12:59:53.172874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.366 [2024-07-15 12:59:53.172884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.366 [2024-07-15 12:59:53.172892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.366 [2024-07-15 12:59:53.172910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.366 qpair failed and we were unable to recover it.
00:30:01.366 [2024-07-15 12:59:53.182864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.366 [2024-07-15 12:59:53.182948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.366 [2024-07-15 12:59:53.182967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.366 [2024-07-15 12:59:53.182976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.366 [2024-07-15 12:59:53.182984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.366 [2024-07-15 12:59:53.183002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.366 qpair failed and we were unable to recover it.
00:30:01.366 [2024-07-15 12:59:53.193102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.366 [2024-07-15 12:59:53.193246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.366 [2024-07-15 12:59:53.193271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.366 [2024-07-15 12:59:53.193285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.366 [2024-07-15 12:59:53.193293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.366 [2024-07-15 12:59:53.193313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.366 qpair failed and we were unable to recover it.
00:30:01.366 [2024-07-15 12:59:53.202908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.366 [2024-07-15 12:59:53.203008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.366 [2024-07-15 12:59:53.203028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.366 [2024-07-15 12:59:53.203037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.366 [2024-07-15 12:59:53.203045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.366 [2024-07-15 12:59:53.203064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.366 qpair failed and we were unable to recover it.
00:30:01.366 [2024-07-15 12:59:53.212965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.366 [2024-07-15 12:59:53.213104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.366 [2024-07-15 12:59:53.213125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.366 [2024-07-15 12:59:53.213135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.366 [2024-07-15 12:59:53.213144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.366 [2024-07-15 12:59:53.213163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.366 qpair failed and we were unable to recover it.
00:30:01.366 [2024-07-15 12:59:53.223012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.366 [2024-07-15 12:59:53.223155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.366 [2024-07-15 12:59:53.223176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.366 [2024-07-15 12:59:53.223185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.366 [2024-07-15 12:59:53.223194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.366 [2024-07-15 12:59:53.223212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.366 qpair failed and we were unable to recover it.
00:30:01.366 [2024-07-15 12:59:53.233281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.366 [2024-07-15 12:59:53.233402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.366 [2024-07-15 12:59:53.233422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.366 [2024-07-15 12:59:53.233432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.366 [2024-07-15 12:59:53.233440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.367 [2024-07-15 12:59:53.233463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.367 qpair failed and we were unable to recover it.
00:30:01.367 [2024-07-15 12:59:53.243077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.367 [2024-07-15 12:59:53.243179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.367 [2024-07-15 12:59:53.243199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.367 [2024-07-15 12:59:53.243209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.367 [2024-07-15 12:59:53.243217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.367 [2024-07-15 12:59:53.243236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.367 qpair failed and we were unable to recover it.
00:30:01.367 [2024-07-15 12:59:53.253090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.367 [2024-07-15 12:59:53.253215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.367 [2024-07-15 12:59:53.253236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.367 [2024-07-15 12:59:53.253245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.367 [2024-07-15 12:59:53.253259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.367 [2024-07-15 12:59:53.253279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.367 qpair failed and we were unable to recover it.
00:30:01.367 [2024-07-15 12:59:53.263144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.367 [2024-07-15 12:59:53.263242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.367 [2024-07-15 12:59:53.263269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.367 [2024-07-15 12:59:53.263279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.367 [2024-07-15 12:59:53.263287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.367 [2024-07-15 12:59:53.263306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.367 qpair failed and we were unable to recover it.
00:30:01.367 [2024-07-15 12:59:53.273291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.367 [2024-07-15 12:59:53.273411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.367 [2024-07-15 12:59:53.273430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.367 [2024-07-15 12:59:53.273440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.367 [2024-07-15 12:59:53.273448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.367 [2024-07-15 12:59:53.273467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.367 qpair failed and we were unable to recover it. 
00:30:01.367 [2024-07-15 12:59:53.283149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.367 [2024-07-15 12:59:53.283250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.367 [2024-07-15 12:59:53.283275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.367 [2024-07-15 12:59:53.283289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.367 [2024-07-15 12:59:53.283297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.367 [2024-07-15 12:59:53.283316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.367 qpair failed and we were unable to recover it. 
00:30:01.367 [2024-07-15 12:59:53.293251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.367 [2024-07-15 12:59:53.293359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.367 [2024-07-15 12:59:53.293379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.367 [2024-07-15 12:59:53.293389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.367 [2024-07-15 12:59:53.293397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.367 [2024-07-15 12:59:53.293415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.367 qpair failed and we were unable to recover it. 
00:30:01.367 [2024-07-15 12:59:53.303287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.367 [2024-07-15 12:59:53.303377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.367 [2024-07-15 12:59:53.303397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.367 [2024-07-15 12:59:53.303406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.367 [2024-07-15 12:59:53.303415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.367 [2024-07-15 12:59:53.303434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.367 qpair failed and we were unable to recover it. 
00:30:01.628 [2024-07-15 12:59:53.313457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.628 [2024-07-15 12:59:53.313574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.628 [2024-07-15 12:59:53.313594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.628 [2024-07-15 12:59:53.313603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.628 [2024-07-15 12:59:53.313612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.628 [2024-07-15 12:59:53.313631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.628 qpair failed and we were unable to recover it. 
00:30:01.628 [2024-07-15 12:59:53.323335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.628 [2024-07-15 12:59:53.323465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.628 [2024-07-15 12:59:53.323484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.628 [2024-07-15 12:59:53.323494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.628 [2024-07-15 12:59:53.323502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.628 [2024-07-15 12:59:53.323521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.628 qpair failed and we were unable to recover it. 
00:30:01.628 [2024-07-15 12:59:53.333331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.628 [2024-07-15 12:59:53.333418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.628 [2024-07-15 12:59:53.333438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.628 [2024-07-15 12:59:53.333447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.628 [2024-07-15 12:59:53.333456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.628 [2024-07-15 12:59:53.333475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.628 qpair failed and we were unable to recover it. 
00:30:01.628 [2024-07-15 12:59:53.343399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.628 [2024-07-15 12:59:53.343502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.628 [2024-07-15 12:59:53.343522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.628 [2024-07-15 12:59:53.343531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.628 [2024-07-15 12:59:53.343539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.628 [2024-07-15 12:59:53.343558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.628 qpair failed and we were unable to recover it. 
00:30:01.628 [2024-07-15 12:59:53.353625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.628 [2024-07-15 12:59:53.353745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.628 [2024-07-15 12:59:53.353765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.628 [2024-07-15 12:59:53.353774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.628 [2024-07-15 12:59:53.353783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.628 [2024-07-15 12:59:53.353802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.628 qpair failed and we were unable to recover it. 
00:30:01.628 [2024-07-15 12:59:53.363383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.628 [2024-07-15 12:59:53.363478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.628 [2024-07-15 12:59:53.363499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.628 [2024-07-15 12:59:53.363508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.628 [2024-07-15 12:59:53.363516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.628 [2024-07-15 12:59:53.363535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.628 qpair failed and we were unable to recover it. 
00:30:01.628 [2024-07-15 12:59:53.373400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.628 [2024-07-15 12:59:53.373498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.628 [2024-07-15 12:59:53.373521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.628 [2024-07-15 12:59:53.373531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.628 [2024-07-15 12:59:53.373539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.628 [2024-07-15 12:59:53.373558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.628 qpair failed and we were unable to recover it. 
00:30:01.628 [2024-07-15 12:59:53.383557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.628 [2024-07-15 12:59:53.383667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.628 [2024-07-15 12:59:53.383689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.628 [2024-07-15 12:59:53.383698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.628 [2024-07-15 12:59:53.383707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.628 [2024-07-15 12:59:53.383725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.628 qpair failed and we were unable to recover it. 
00:30:01.628 [2024-07-15 12:59:53.393682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.628 [2024-07-15 12:59:53.393806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.628 [2024-07-15 12:59:53.393825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.628 [2024-07-15 12:59:53.393835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.628 [2024-07-15 12:59:53.393843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.629 [2024-07-15 12:59:53.393862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.629 qpair failed and we were unable to recover it. 
00:30:01.629 [2024-07-15 12:59:53.403536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.629 [2024-07-15 12:59:53.403633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.629 [2024-07-15 12:59:53.403652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.629 [2024-07-15 12:59:53.403661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.629 [2024-07-15 12:59:53.403669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.629 [2024-07-15 12:59:53.403688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.629 qpair failed and we were unable to recover it. 
00:30:01.629 [2024-07-15 12:59:53.413547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.629 [2024-07-15 12:59:53.413634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.629 [2024-07-15 12:59:53.413653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.629 [2024-07-15 12:59:53.413663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.629 [2024-07-15 12:59:53.413671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.629 [2024-07-15 12:59:53.413689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.629 qpair failed and we were unable to recover it. 
00:30:01.629 [2024-07-15 12:59:53.423581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.629 [2024-07-15 12:59:53.423676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.629 [2024-07-15 12:59:53.423696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.629 [2024-07-15 12:59:53.423705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.629 [2024-07-15 12:59:53.423713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.629 [2024-07-15 12:59:53.423731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.629 qpair failed and we were unable to recover it. 
00:30:01.629 [2024-07-15 12:59:53.433849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.629 [2024-07-15 12:59:53.433980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.629 [2024-07-15 12:59:53.434000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.629 [2024-07-15 12:59:53.434009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.629 [2024-07-15 12:59:53.434017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.629 [2024-07-15 12:59:53.434036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.629 qpair failed and we were unable to recover it. 
00:30:01.629 [2024-07-15 12:59:53.443680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.629 [2024-07-15 12:59:53.443772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.629 [2024-07-15 12:59:53.443791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.629 [2024-07-15 12:59:53.443800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.629 [2024-07-15 12:59:53.443809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.629 [2024-07-15 12:59:53.443827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.629 qpair failed and we were unable to recover it. 
00:30:01.629 [2024-07-15 12:59:53.453683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.629 [2024-07-15 12:59:53.453777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.629 [2024-07-15 12:59:53.453798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.629 [2024-07-15 12:59:53.453808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.629 [2024-07-15 12:59:53.453817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.629 [2024-07-15 12:59:53.453836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.629 qpair failed and we were unable to recover it. 
00:30:01.629 [2024-07-15 12:59:53.463770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.629 [2024-07-15 12:59:53.463859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.629 [2024-07-15 12:59:53.463881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.629 [2024-07-15 12:59:53.463891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.629 [2024-07-15 12:59:53.463899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.629 [2024-07-15 12:59:53.463917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.629 qpair failed and we were unable to recover it. 
00:30:01.629 [2024-07-15 12:59:53.473929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.629 [2024-07-15 12:59:53.474075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.629 [2024-07-15 12:59:53.474095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.629 [2024-07-15 12:59:53.474105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.629 [2024-07-15 12:59:53.474115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.629 [2024-07-15 12:59:53.474135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.629 qpair failed and we were unable to recover it. 
00:30:01.629 [2024-07-15 12:59:53.483806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.629 [2024-07-15 12:59:53.483949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.629 [2024-07-15 12:59:53.483969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.629 [2024-07-15 12:59:53.483978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.629 [2024-07-15 12:59:53.483987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.629 [2024-07-15 12:59:53.484005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.629 qpair failed and we were unable to recover it. 
00:30:01.629 [2024-07-15 12:59:53.493848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.629 [2024-07-15 12:59:53.493937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.629 [2024-07-15 12:59:53.493956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.629 [2024-07-15 12:59:53.493966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.629 [2024-07-15 12:59:53.493974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.629 [2024-07-15 12:59:53.493992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.629 qpair failed and we were unable to recover it. 
00:30:01.629 [2024-07-15 12:59:53.503889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.629 [2024-07-15 12:59:53.503974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.629 [2024-07-15 12:59:53.503994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.629 [2024-07-15 12:59:53.504003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.629 [2024-07-15 12:59:53.504011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.629 [2024-07-15 12:59:53.504034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.629 qpair failed and we were unable to recover it. 
00:30:01.629 [2024-07-15 12:59:53.514167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.629 [2024-07-15 12:59:53.514287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.629 [2024-07-15 12:59:53.514307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.629 [2024-07-15 12:59:53.514316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.629 [2024-07-15 12:59:53.514325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.629 [2024-07-15 12:59:53.514343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.629 qpair failed and we were unable to recover it. 
00:30:01.629 [2024-07-15 12:59:53.523996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.629 [2024-07-15 12:59:53.524095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.629 [2024-07-15 12:59:53.524115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.629 [2024-07-15 12:59:53.524125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.629 [2024-07-15 12:59:53.524133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.629 [2024-07-15 12:59:53.524152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.629 qpair failed and we were unable to recover it. 
00:30:01.629 [2024-07-15 12:59:53.533988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.629 [2024-07-15 12:59:53.534085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.629 [2024-07-15 12:59:53.534104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.629 [2024-07-15 12:59:53.534113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.629 [2024-07-15 12:59:53.534121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.629 [2024-07-15 12:59:53.534140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.629 qpair failed and we were unable to recover it. 
00:30:01.629 [2024-07-15 12:59:53.544059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.630 [2024-07-15 12:59:53.544151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.630 [2024-07-15 12:59:53.544170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.630 [2024-07-15 12:59:53.544180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.630 [2024-07-15 12:59:53.544188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:01.630 [2024-07-15 12:59:53.544207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.630 qpair failed and we were unable to recover it. 
00:30:01.630 [2024-07-15 12:59:53.554266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.630 [2024-07-15 12:59:53.554386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.630 [2024-07-15 12:59:53.554411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.630 [2024-07-15 12:59:53.554421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.630 [2024-07-15 12:59:53.554430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.630 [2024-07-15 12:59:53.554450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.630 qpair failed and we were unable to recover it.
00:30:01.630 [2024-07-15 12:59:53.564062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.630 [2024-07-15 12:59:53.564156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.630 [2024-07-15 12:59:53.564177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.630 [2024-07-15 12:59:53.564186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.630 [2024-07-15 12:59:53.564195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.630 [2024-07-15 12:59:53.564214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.630 qpair failed and we were unable to recover it.
00:30:01.890 [2024-07-15 12:59:53.574134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.890 [2024-07-15 12:59:53.574231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.890 [2024-07-15 12:59:53.574251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.890 [2024-07-15 12:59:53.574266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.890 [2024-07-15 12:59:53.574275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.890 [2024-07-15 12:59:53.574295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.890 qpair failed and we were unable to recover it.
00:30:01.890 [2024-07-15 12:59:53.584183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.890 [2024-07-15 12:59:53.584283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.890 [2024-07-15 12:59:53.584302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.890 [2024-07-15 12:59:53.584312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.890 [2024-07-15 12:59:53.584321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.890 [2024-07-15 12:59:53.584340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.890 qpair failed and we were unable to recover it.
00:30:01.890 [2024-07-15 12:59:53.594374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.890 [2024-07-15 12:59:53.594526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.890 [2024-07-15 12:59:53.594546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.890 [2024-07-15 12:59:53.594556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.890 [2024-07-15 12:59:53.594564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.890 [2024-07-15 12:59:53.594587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.890 qpair failed and we were unable to recover it.
00:30:01.890 [2024-07-15 12:59:53.604226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.890 [2024-07-15 12:59:53.604334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.890 [2024-07-15 12:59:53.604356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.890 [2024-07-15 12:59:53.604365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.890 [2024-07-15 12:59:53.604374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.890 [2024-07-15 12:59:53.604392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.890 qpair failed and we were unable to recover it.
00:30:01.890 [2024-07-15 12:59:53.614209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.890 [2024-07-15 12:59:53.614303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.890 [2024-07-15 12:59:53.614323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.890 [2024-07-15 12:59:53.614333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.890 [2024-07-15 12:59:53.614341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.890 [2024-07-15 12:59:53.614359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.890 qpair failed and we were unable to recover it.
00:30:01.890 [2024-07-15 12:59:53.624307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.890 [2024-07-15 12:59:53.624418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.890 [2024-07-15 12:59:53.624438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.890 [2024-07-15 12:59:53.624448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.890 [2024-07-15 12:59:53.624456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.890 [2024-07-15 12:59:53.624476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.890 qpair failed and we were unable to recover it.
00:30:01.890 [2024-07-15 12:59:53.634556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.890 [2024-07-15 12:59:53.634668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.890 [2024-07-15 12:59:53.634687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.890 [2024-07-15 12:59:53.634697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.890 [2024-07-15 12:59:53.634706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.890 [2024-07-15 12:59:53.634725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.890 qpair failed and we were unable to recover it.
00:30:01.890 [2024-07-15 12:59:53.644332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.890 [2024-07-15 12:59:53.644448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.890 [2024-07-15 12:59:53.644471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.890 [2024-07-15 12:59:53.644481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.890 [2024-07-15 12:59:53.644489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.890 [2024-07-15 12:59:53.644509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.890 qpair failed and we were unable to recover it.
00:30:01.890 [2024-07-15 12:59:53.654376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.890 [2024-07-15 12:59:53.654470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.890 [2024-07-15 12:59:53.654489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.890 [2024-07-15 12:59:53.654498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.890 [2024-07-15 12:59:53.654506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.890 [2024-07-15 12:59:53.654525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.890 qpair failed and we were unable to recover it.
00:30:01.890 [2024-07-15 12:59:53.664452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.890 [2024-07-15 12:59:53.664534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.890 [2024-07-15 12:59:53.664555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.890 [2024-07-15 12:59:53.664565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.890 [2024-07-15 12:59:53.664573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.890 [2024-07-15 12:59:53.664592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.890 qpair failed and we were unable to recover it.
00:30:01.890 [2024-07-15 12:59:53.674648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.890 [2024-07-15 12:59:53.674763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.891 [2024-07-15 12:59:53.674783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.891 [2024-07-15 12:59:53.674792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.891 [2024-07-15 12:59:53.674800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.891 [2024-07-15 12:59:53.674819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.891 qpair failed and we were unable to recover it.
00:30:01.891 [2024-07-15 12:59:53.684478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.891 [2024-07-15 12:59:53.684578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.891 [2024-07-15 12:59:53.684598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.891 [2024-07-15 12:59:53.684608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.891 [2024-07-15 12:59:53.684620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.891 [2024-07-15 12:59:53.684639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.891 qpair failed and we were unable to recover it.
00:30:01.891 [2024-07-15 12:59:53.694529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.891 [2024-07-15 12:59:53.694614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.891 [2024-07-15 12:59:53.694634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.891 [2024-07-15 12:59:53.694643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.891 [2024-07-15 12:59:53.694651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.891 [2024-07-15 12:59:53.694669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.891 qpair failed and we were unable to recover it.
00:30:01.891 [2024-07-15 12:59:53.704535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.891 [2024-07-15 12:59:53.704621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.891 [2024-07-15 12:59:53.704641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.891 [2024-07-15 12:59:53.704650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.891 [2024-07-15 12:59:53.704658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.891 [2024-07-15 12:59:53.704677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.891 qpair failed and we were unable to recover it.
00:30:01.891 [2024-07-15 12:59:53.714823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.891 [2024-07-15 12:59:53.714947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.891 [2024-07-15 12:59:53.714967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.891 [2024-07-15 12:59:53.714976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.891 [2024-07-15 12:59:53.714984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.891 [2024-07-15 12:59:53.715003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.891 qpair failed and we were unable to recover it.
00:30:01.891 [2024-07-15 12:59:53.724621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.891 [2024-07-15 12:59:53.724720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.891 [2024-07-15 12:59:53.724739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.891 [2024-07-15 12:59:53.724748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.891 [2024-07-15 12:59:53.724756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.891 [2024-07-15 12:59:53.724775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.891 qpair failed and we were unable to recover it.
00:30:01.891 [2024-07-15 12:59:53.734652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.891 [2024-07-15 12:59:53.734753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.891 [2024-07-15 12:59:53.734773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.891 [2024-07-15 12:59:53.734783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.891 [2024-07-15 12:59:53.734791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.891 [2024-07-15 12:59:53.734810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.891 qpair failed and we were unable to recover it.
00:30:01.891 [2024-07-15 12:59:53.744666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.891 [2024-07-15 12:59:53.744793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.891 [2024-07-15 12:59:53.744812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.891 [2024-07-15 12:59:53.744821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.891 [2024-07-15 12:59:53.744830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.891 [2024-07-15 12:59:53.744848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.891 qpair failed and we were unable to recover it.
00:30:01.891 [2024-07-15 12:59:53.754926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.891 [2024-07-15 12:59:53.755067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.891 [2024-07-15 12:59:53.755086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.891 [2024-07-15 12:59:53.755095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.891 [2024-07-15 12:59:53.755103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.891 [2024-07-15 12:59:53.755123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.891 qpair failed and we were unable to recover it.
00:30:01.891 [2024-07-15 12:59:53.764765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.891 [2024-07-15 12:59:53.764857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.891 [2024-07-15 12:59:53.764877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.891 [2024-07-15 12:59:53.764886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.891 [2024-07-15 12:59:53.764895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.891 [2024-07-15 12:59:53.764913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.891 qpair failed and we were unable to recover it.
00:30:01.891 [2024-07-15 12:59:53.774783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.891 [2024-07-15 12:59:53.774876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.891 [2024-07-15 12:59:53.774896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.891 [2024-07-15 12:59:53.774906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.891 [2024-07-15 12:59:53.774918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.892 [2024-07-15 12:59:53.774936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.892 qpair failed and we were unable to recover it.
00:30:01.892 [2024-07-15 12:59:53.784818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.892 [2024-07-15 12:59:53.784946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.892 [2024-07-15 12:59:53.784966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.892 [2024-07-15 12:59:53.784975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.892 [2024-07-15 12:59:53.784983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.892 [2024-07-15 12:59:53.785003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.892 qpair failed and we were unable to recover it.
00:30:01.892 [2024-07-15 12:59:53.795091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.892 [2024-07-15 12:59:53.795216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.892 [2024-07-15 12:59:53.795235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.892 [2024-07-15 12:59:53.795245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.892 [2024-07-15 12:59:53.795253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.892 [2024-07-15 12:59:53.795278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.892 qpair failed and we were unable to recover it.
00:30:01.892 [2024-07-15 12:59:53.804931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.892 [2024-07-15 12:59:53.805039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.892 [2024-07-15 12:59:53.805058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.892 [2024-07-15 12:59:53.805067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.892 [2024-07-15 12:59:53.805075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.892 [2024-07-15 12:59:53.805093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.892 qpair failed and we were unable to recover it.
00:30:01.892 [2024-07-15 12:59:53.814929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.892 [2024-07-15 12:59:53.815024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.892 [2024-07-15 12:59:53.815044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.892 [2024-07-15 12:59:53.815054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.892 [2024-07-15 12:59:53.815062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.892 [2024-07-15 12:59:53.815080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.892 qpair failed and we were unable to recover it.
00:30:01.892 [2024-07-15 12:59:53.824976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.892 [2024-07-15 12:59:53.825070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.892 [2024-07-15 12:59:53.825090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.892 [2024-07-15 12:59:53.825100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.892 [2024-07-15 12:59:53.825108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:01.892 [2024-07-15 12:59:53.825126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.892 qpair failed and we were unable to recover it.
00:30:02.152 [2024-07-15 12:59:53.835237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.152 [2024-07-15 12:59:53.835370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.152 [2024-07-15 12:59:53.835390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.152 [2024-07-15 12:59:53.835400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.152 [2024-07-15 12:59:53.835409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.152 [2024-07-15 12:59:53.835428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.152 qpair failed and we were unable to recover it.
00:30:02.152 [2024-07-15 12:59:53.845015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.152 [2024-07-15 12:59:53.845103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.152 [2024-07-15 12:59:53.845123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.152 [2024-07-15 12:59:53.845132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.152 [2024-07-15 12:59:53.845140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.152 [2024-07-15 12:59:53.845158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.152 qpair failed and we were unable to recover it.
00:30:02.152 [2024-07-15 12:59:53.855127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.152 [2024-07-15 12:59:53.855211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.152 [2024-07-15 12:59:53.855230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.152 [2024-07-15 12:59:53.855239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.152 [2024-07-15 12:59:53.855248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.152 [2024-07-15 12:59:53.855270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.152 qpair failed and we were unable to recover it. 
00:30:02.152 [2024-07-15 12:59:53.865081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.152 [2024-07-15 12:59:53.865167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.152 [2024-07-15 12:59:53.865186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.152 [2024-07-15 12:59:53.865196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.152 [2024-07-15 12:59:53.865208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.152 [2024-07-15 12:59:53.865227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.152 qpair failed and we were unable to recover it. 
00:30:02.152 [2024-07-15 12:59:53.875327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.153 [2024-07-15 12:59:53.875445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.153 [2024-07-15 12:59:53.875464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.153 [2024-07-15 12:59:53.875473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.153 [2024-07-15 12:59:53.875482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.153 [2024-07-15 12:59:53.875500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.153 qpair failed and we were unable to recover it. 
00:30:02.153 [2024-07-15 12:59:53.885215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.153 [2024-07-15 12:59:53.885317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.153 [2024-07-15 12:59:53.885336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.153 [2024-07-15 12:59:53.885345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.153 [2024-07-15 12:59:53.885353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.153 [2024-07-15 12:59:53.885371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.153 qpair failed and we were unable to recover it. 
00:30:02.153 [2024-07-15 12:59:53.895213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.153 [2024-07-15 12:59:53.895349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.153 [2024-07-15 12:59:53.895368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.153 [2024-07-15 12:59:53.895378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.153 [2024-07-15 12:59:53.895386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.153 [2024-07-15 12:59:53.895405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.153 qpair failed and we were unable to recover it.
00:30:02.153 [2024-07-15 12:59:53.905207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.153 [2024-07-15 12:59:53.905315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.153 [2024-07-15 12:59:53.905334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.153 [2024-07-15 12:59:53.905344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.153 [2024-07-15 12:59:53.905352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.153 [2024-07-15 12:59:53.905371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.153 qpair failed and we were unable to recover it.
00:30:02.153 [2024-07-15 12:59:53.915444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.153 [2024-07-15 12:59:53.915563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.153 [2024-07-15 12:59:53.915583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.153 [2024-07-15 12:59:53.915592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.153 [2024-07-15 12:59:53.915600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.153 [2024-07-15 12:59:53.915619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.153 qpair failed and we were unable to recover it.
00:30:02.153 [2024-07-15 12:59:53.925296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.153 [2024-07-15 12:59:53.925433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.153 [2024-07-15 12:59:53.925452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.153 [2024-07-15 12:59:53.925461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.153 [2024-07-15 12:59:53.925470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.153 [2024-07-15 12:59:53.925488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.153 qpair failed and we were unable to recover it.
00:30:02.153 [2024-07-15 12:59:53.935330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.153 [2024-07-15 12:59:53.935427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.153 [2024-07-15 12:59:53.935446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.153 [2024-07-15 12:59:53.935455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.153 [2024-07-15 12:59:53.935463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.153 [2024-07-15 12:59:53.935482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.153 qpair failed and we were unable to recover it.
00:30:02.153 [2024-07-15 12:59:53.945359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.153 [2024-07-15 12:59:53.945443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.153 [2024-07-15 12:59:53.945462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.153 [2024-07-15 12:59:53.945472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.153 [2024-07-15 12:59:53.945480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.153 [2024-07-15 12:59:53.945499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.153 qpair failed and we were unable to recover it.
00:30:02.153 [2024-07-15 12:59:53.955611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.153 [2024-07-15 12:59:53.955727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.153 [2024-07-15 12:59:53.955746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.153 [2024-07-15 12:59:53.955759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.153 [2024-07-15 12:59:53.955768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.153 [2024-07-15 12:59:53.955786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.153 qpair failed and we were unable to recover it.
00:30:02.153 [2024-07-15 12:59:53.965532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.153 [2024-07-15 12:59:53.965648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.153 [2024-07-15 12:59:53.965666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.153 [2024-07-15 12:59:53.965675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.153 [2024-07-15 12:59:53.965684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.153 [2024-07-15 12:59:53.965702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.153 qpair failed and we were unable to recover it.
00:30:02.153 [2024-07-15 12:59:53.975460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.153 [2024-07-15 12:59:53.975547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.153 [2024-07-15 12:59:53.975566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.153 [2024-07-15 12:59:53.975575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.153 [2024-07-15 12:59:53.975583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.153 [2024-07-15 12:59:53.975601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.153 qpair failed and we were unable to recover it.
00:30:02.153 [2024-07-15 12:59:53.985508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.153 [2024-07-15 12:59:53.985597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.153 [2024-07-15 12:59:53.985616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.153 [2024-07-15 12:59:53.985625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.153 [2024-07-15 12:59:53.985633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.153 [2024-07-15 12:59:53.985651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.153 qpair failed and we were unable to recover it.
00:30:02.153 [2024-07-15 12:59:53.995750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.153 [2024-07-15 12:59:53.995866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.153 [2024-07-15 12:59:53.995885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.153 [2024-07-15 12:59:53.995894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.153 [2024-07-15 12:59:53.995902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.153 [2024-07-15 12:59:53.995920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.153 qpair failed and we were unable to recover it.
00:30:02.153 [2024-07-15 12:59:54.005559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.153 [2024-07-15 12:59:54.005678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.153 [2024-07-15 12:59:54.005697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.153 [2024-07-15 12:59:54.005706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.153 [2024-07-15 12:59:54.005715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.153 [2024-07-15 12:59:54.005734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.153 qpair failed and we were unable to recover it.
00:30:02.153 [2024-07-15 12:59:54.015690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.153 [2024-07-15 12:59:54.015784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.153 [2024-07-15 12:59:54.015803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.153 [2024-07-15 12:59:54.015812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.153 [2024-07-15 12:59:54.015820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.154 [2024-07-15 12:59:54.015839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.154 qpair failed and we were unable to recover it.
00:30:02.154 [2024-07-15 12:59:54.025618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.154 [2024-07-15 12:59:54.025714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.154 [2024-07-15 12:59:54.025733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.154 [2024-07-15 12:59:54.025743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.154 [2024-07-15 12:59:54.025751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.154 [2024-07-15 12:59:54.025769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.154 qpair failed and we were unable to recover it.
00:30:02.154 [2024-07-15 12:59:54.035932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.154 [2024-07-15 12:59:54.036053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.154 [2024-07-15 12:59:54.036072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.154 [2024-07-15 12:59:54.036081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.154 [2024-07-15 12:59:54.036089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.154 [2024-07-15 12:59:54.036108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.154 qpair failed and we were unable to recover it.
00:30:02.154 [2024-07-15 12:59:54.045666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.154 [2024-07-15 12:59:54.045804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.154 [2024-07-15 12:59:54.045824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.154 [2024-07-15 12:59:54.045837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.154 [2024-07-15 12:59:54.045845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.154 [2024-07-15 12:59:54.045863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.154 qpair failed and we were unable to recover it.
00:30:02.154 [2024-07-15 12:59:54.055750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.154 [2024-07-15 12:59:54.055852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.154 [2024-07-15 12:59:54.055872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.154 [2024-07-15 12:59:54.055881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.154 [2024-07-15 12:59:54.055889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.154 [2024-07-15 12:59:54.055908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.154 qpair failed and we were unable to recover it.
00:30:02.154 [2024-07-15 12:59:54.065767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.154 [2024-07-15 12:59:54.065869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.154 [2024-07-15 12:59:54.065889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.154 [2024-07-15 12:59:54.065898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.154 [2024-07-15 12:59:54.065906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.154 [2024-07-15 12:59:54.065925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.154 qpair failed and we were unable to recover it.
00:30:02.154 [2024-07-15 12:59:54.076035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.154 [2024-07-15 12:59:54.076189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.154 [2024-07-15 12:59:54.076207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.154 [2024-07-15 12:59:54.076217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.154 [2024-07-15 12:59:54.076225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.154 [2024-07-15 12:59:54.076243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.154 qpair failed and we were unable to recover it.
00:30:02.154 [2024-07-15 12:59:54.085843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.154 [2024-07-15 12:59:54.085933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.154 [2024-07-15 12:59:54.085952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.154 [2024-07-15 12:59:54.085961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.154 [2024-07-15 12:59:54.085970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.154 [2024-07-15 12:59:54.085988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.154 qpair failed and we were unable to recover it.
00:30:02.415 [2024-07-15 12:59:54.095875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.415 [2024-07-15 12:59:54.095971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.415 [2024-07-15 12:59:54.095990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.415 [2024-07-15 12:59:54.095999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.415 [2024-07-15 12:59:54.096007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.415 [2024-07-15 12:59:54.096026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.415 qpair failed and we were unable to recover it.
00:30:02.415 [2024-07-15 12:59:54.105962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.415 [2024-07-15 12:59:54.106097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.415 [2024-07-15 12:59:54.106116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.415 [2024-07-15 12:59:54.106125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.415 [2024-07-15 12:59:54.106134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.415 [2024-07-15 12:59:54.106152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.415 qpair failed and we were unable to recover it.
00:30:02.415 [2024-07-15 12:59:54.116044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.415 [2024-07-15 12:59:54.116161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.415 [2024-07-15 12:59:54.116180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.415 [2024-07-15 12:59:54.116189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.415 [2024-07-15 12:59:54.116197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.415 [2024-07-15 12:59:54.116215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.415 qpair failed and we were unable to recover it.
00:30:02.415 [2024-07-15 12:59:54.126034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.415 [2024-07-15 12:59:54.126172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.415 [2024-07-15 12:59:54.126191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.415 [2024-07-15 12:59:54.126200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.415 [2024-07-15 12:59:54.126208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.415 [2024-07-15 12:59:54.126227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.415 qpair failed and we were unable to recover it.
00:30:02.415 [2024-07-15 12:59:54.136007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.415 [2024-07-15 12:59:54.136140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.415 [2024-07-15 12:59:54.136163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.415 [2024-07-15 12:59:54.136173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.415 [2024-07-15 12:59:54.136181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.415 [2024-07-15 12:59:54.136199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.415 qpair failed and we were unable to recover it.
00:30:02.415 [2024-07-15 12:59:54.146005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.415 [2024-07-15 12:59:54.146093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.415 [2024-07-15 12:59:54.146113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.415 [2024-07-15 12:59:54.146122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.415 [2024-07-15 12:59:54.146131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.415 [2024-07-15 12:59:54.146149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.415 qpair failed and we were unable to recover it.
00:30:02.415 [2024-07-15 12:59:54.156269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.415 [2024-07-15 12:59:54.156382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.415 [2024-07-15 12:59:54.156402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.416 [2024-07-15 12:59:54.156411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.416 [2024-07-15 12:59:54.156420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.416 [2024-07-15 12:59:54.156438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.416 qpair failed and we were unable to recover it.
00:30:02.416 [2024-07-15 12:59:54.166104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.416 [2024-07-15 12:59:54.166200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.416 [2024-07-15 12:59:54.166220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.416 [2024-07-15 12:59:54.166229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.416 [2024-07-15 12:59:54.166237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.416 [2024-07-15 12:59:54.166262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.416 qpair failed and we were unable to recover it.
00:30:02.416 [2024-07-15 12:59:54.176116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.416 [2024-07-15 12:59:54.176205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.416 [2024-07-15 12:59:54.176225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.416 [2024-07-15 12:59:54.176234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.416 [2024-07-15 12:59:54.176242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.416 [2024-07-15 12:59:54.176288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.416 qpair failed and we were unable to recover it.
00:30:02.416 [2024-07-15 12:59:54.186212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.416 [2024-07-15 12:59:54.186364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.416 [2024-07-15 12:59:54.186384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.416 [2024-07-15 12:59:54.186393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.416 [2024-07-15 12:59:54.186401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.416 [2024-07-15 12:59:54.186420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.416 qpair failed and we were unable to recover it.
00:30:02.416 [2024-07-15 12:59:54.196384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.416 [2024-07-15 12:59:54.196502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.416 [2024-07-15 12:59:54.196522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.416 [2024-07-15 12:59:54.196531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.416 [2024-07-15 12:59:54.196540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.416 [2024-07-15 12:59:54.196558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.416 qpair failed and we were unable to recover it.
00:30:02.416 [2024-07-15 12:59:54.206234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.416 [2024-07-15 12:59:54.206343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.416 [2024-07-15 12:59:54.206363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.416 [2024-07-15 12:59:54.206372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.416 [2024-07-15 12:59:54.206380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.416 [2024-07-15 12:59:54.206399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.416 qpair failed and we were unable to recover it.
00:30:02.416 [2024-07-15 12:59:54.216290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.416 [2024-07-15 12:59:54.216378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.416 [2024-07-15 12:59:54.216398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.416 [2024-07-15 12:59:54.216407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.416 [2024-07-15 12:59:54.216415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.416 [2024-07-15 12:59:54.216434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.416 qpair failed and we were unable to recover it. 
00:30:02.416 [2024-07-15 12:59:54.226303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.416 [2024-07-15 12:59:54.226390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.416 [2024-07-15 12:59:54.226413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.416 [2024-07-15 12:59:54.226422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.416 [2024-07-15 12:59:54.226430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.416 [2024-07-15 12:59:54.226449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.416 qpair failed and we were unable to recover it. 
00:30:02.416 [2024-07-15 12:59:54.236504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.416 [2024-07-15 12:59:54.236624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.416 [2024-07-15 12:59:54.236643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.416 [2024-07-15 12:59:54.236653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.416 [2024-07-15 12:59:54.236661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.416 [2024-07-15 12:59:54.236679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.416 qpair failed and we were unable to recover it. 
00:30:02.416 [2024-07-15 12:59:54.246358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.416 [2024-07-15 12:59:54.246479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.416 [2024-07-15 12:59:54.246498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.416 [2024-07-15 12:59:54.246508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.416 [2024-07-15 12:59:54.246516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.416 [2024-07-15 12:59:54.246535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.416 qpair failed and we were unable to recover it. 
00:30:02.416 [2024-07-15 12:59:54.256427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.416 [2024-07-15 12:59:54.256524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.416 [2024-07-15 12:59:54.256543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.416 [2024-07-15 12:59:54.256552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.416 [2024-07-15 12:59:54.256561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.416 [2024-07-15 12:59:54.256579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.416 qpair failed and we were unable to recover it. 
00:30:02.416 [2024-07-15 12:59:54.266465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.416 [2024-07-15 12:59:54.266555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.416 [2024-07-15 12:59:54.266574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.416 [2024-07-15 12:59:54.266583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.416 [2024-07-15 12:59:54.266592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.416 [2024-07-15 12:59:54.266615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.416 qpair failed and we were unable to recover it. 
00:30:02.416 [2024-07-15 12:59:54.276663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.416 [2024-07-15 12:59:54.276779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.416 [2024-07-15 12:59:54.276798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.416 [2024-07-15 12:59:54.276807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.416 [2024-07-15 12:59:54.276815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.416 [2024-07-15 12:59:54.276834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.416 qpair failed and we were unable to recover it. 
00:30:02.416 [2024-07-15 12:59:54.286519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.416 [2024-07-15 12:59:54.286608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.416 [2024-07-15 12:59:54.286628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.416 [2024-07-15 12:59:54.286637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.416 [2024-07-15 12:59:54.286645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.416 [2024-07-15 12:59:54.286664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.416 qpair failed and we were unable to recover it. 
00:30:02.416 [2024-07-15 12:59:54.296537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.416 [2024-07-15 12:59:54.296619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.416 [2024-07-15 12:59:54.296638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.416 [2024-07-15 12:59:54.296647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.416 [2024-07-15 12:59:54.296655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.417 [2024-07-15 12:59:54.296674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.417 qpair failed and we were unable to recover it. 
00:30:02.417 [2024-07-15 12:59:54.306572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.417 [2024-07-15 12:59:54.306666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.417 [2024-07-15 12:59:54.306686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.417 [2024-07-15 12:59:54.306695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.417 [2024-07-15 12:59:54.306703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.417 [2024-07-15 12:59:54.306721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.417 qpair failed and we were unable to recover it. 
00:30:02.417 [2024-07-15 12:59:54.316732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.417 [2024-07-15 12:59:54.316888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.417 [2024-07-15 12:59:54.316912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.417 [2024-07-15 12:59:54.316921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.417 [2024-07-15 12:59:54.316929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.417 [2024-07-15 12:59:54.316948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.417 qpair failed and we were unable to recover it. 
00:30:02.417 [2024-07-15 12:59:54.326701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.417 [2024-07-15 12:59:54.326841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.417 [2024-07-15 12:59:54.326860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.417 [2024-07-15 12:59:54.326869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.417 [2024-07-15 12:59:54.326878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.417 [2024-07-15 12:59:54.326896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.417 qpair failed and we were unable to recover it. 
00:30:02.417 [2024-07-15 12:59:54.336631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.417 [2024-07-15 12:59:54.336726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.417 [2024-07-15 12:59:54.336744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.417 [2024-07-15 12:59:54.336753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.417 [2024-07-15 12:59:54.336762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.417 [2024-07-15 12:59:54.336780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.417 qpair failed and we were unable to recover it. 
00:30:02.417 [2024-07-15 12:59:54.346709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.417 [2024-07-15 12:59:54.346814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.417 [2024-07-15 12:59:54.346834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.417 [2024-07-15 12:59:54.346843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.417 [2024-07-15 12:59:54.346851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.417 [2024-07-15 12:59:54.346869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.417 qpair failed and we were unable to recover it. 
00:30:02.678 [2024-07-15 12:59:54.356958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.678 [2024-07-15 12:59:54.357072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.678 [2024-07-15 12:59:54.357092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.678 [2024-07-15 12:59:54.357101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.678 [2024-07-15 12:59:54.357110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.678 [2024-07-15 12:59:54.357132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.678 qpair failed and we were unable to recover it. 
00:30:02.678 [2024-07-15 12:59:54.366779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.678 [2024-07-15 12:59:54.366878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.678 [2024-07-15 12:59:54.366897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.678 [2024-07-15 12:59:54.366906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.678 [2024-07-15 12:59:54.366915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.678 [2024-07-15 12:59:54.366934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.678 qpair failed and we were unable to recover it. 
00:30:02.678 [2024-07-15 12:59:54.376820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.678 [2024-07-15 12:59:54.376971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.678 [2024-07-15 12:59:54.376990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.678 [2024-07-15 12:59:54.377000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.678 [2024-07-15 12:59:54.377008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.678 [2024-07-15 12:59:54.377026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.678 qpair failed and we were unable to recover it. 
00:30:02.678 [2024-07-15 12:59:54.386900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.678 [2024-07-15 12:59:54.387036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.678 [2024-07-15 12:59:54.387056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.678 [2024-07-15 12:59:54.387066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.678 [2024-07-15 12:59:54.387075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.678 [2024-07-15 12:59:54.387093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.678 qpair failed and we were unable to recover it. 
00:30:02.678 [2024-07-15 12:59:54.397069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.678 [2024-07-15 12:59:54.397204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.678 [2024-07-15 12:59:54.397223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.678 [2024-07-15 12:59:54.397232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.678 [2024-07-15 12:59:54.397240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.678 [2024-07-15 12:59:54.397265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.678 qpair failed and we were unable to recover it. 
00:30:02.678 [2024-07-15 12:59:54.406921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.678 [2024-07-15 12:59:54.407029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.678 [2024-07-15 12:59:54.407052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.678 [2024-07-15 12:59:54.407062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.678 [2024-07-15 12:59:54.407070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.678 [2024-07-15 12:59:54.407089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.678 qpair failed and we were unable to recover it. 
00:30:02.678 [2024-07-15 12:59:54.416931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.678 [2024-07-15 12:59:54.417015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.678 [2024-07-15 12:59:54.417035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.678 [2024-07-15 12:59:54.417044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.678 [2024-07-15 12:59:54.417052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.678 [2024-07-15 12:59:54.417071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.678 qpair failed and we were unable to recover it. 
00:30:02.678 [2024-07-15 12:59:54.426966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.678 [2024-07-15 12:59:54.427081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.679 [2024-07-15 12:59:54.427101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.679 [2024-07-15 12:59:54.427110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.679 [2024-07-15 12:59:54.427118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.679 [2024-07-15 12:59:54.427137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.679 qpair failed and we were unable to recover it. 
00:30:02.679 [2024-07-15 12:59:54.437342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.679 [2024-07-15 12:59:54.437530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.679 [2024-07-15 12:59:54.437550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.679 [2024-07-15 12:59:54.437560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.679 [2024-07-15 12:59:54.437568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.679 [2024-07-15 12:59:54.437588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.679 qpair failed and we were unable to recover it. 
00:30:02.679 [2024-07-15 12:59:54.447031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.679 [2024-07-15 12:59:54.447129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.679 [2024-07-15 12:59:54.447149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.679 [2024-07-15 12:59:54.447158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.679 [2024-07-15 12:59:54.447170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.679 [2024-07-15 12:59:54.447190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.679 qpair failed and we were unable to recover it. 
00:30:02.679 [2024-07-15 12:59:54.457072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.679 [2024-07-15 12:59:54.457181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.679 [2024-07-15 12:59:54.457201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.679 [2024-07-15 12:59:54.457210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.679 [2024-07-15 12:59:54.457218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.679 [2024-07-15 12:59:54.457237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.679 qpair failed and we were unable to recover it. 
00:30:02.679 [2024-07-15 12:59:54.467117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.679 [2024-07-15 12:59:54.467210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.679 [2024-07-15 12:59:54.467230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.679 [2024-07-15 12:59:54.467239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.679 [2024-07-15 12:59:54.467247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.679 [2024-07-15 12:59:54.467271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.679 qpair failed and we were unable to recover it. 
00:30:02.679 [2024-07-15 12:59:54.477346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.679 [2024-07-15 12:59:54.477478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.679 [2024-07-15 12:59:54.477497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.679 [2024-07-15 12:59:54.477507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.679 [2024-07-15 12:59:54.477515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.679 [2024-07-15 12:59:54.477533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.679 qpair failed and we were unable to recover it. 
00:30:02.679 [2024-07-15 12:59:54.487206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.679 [2024-07-15 12:59:54.487307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.679 [2024-07-15 12:59:54.487327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.679 [2024-07-15 12:59:54.487336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.679 [2024-07-15 12:59:54.487345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.679 [2024-07-15 12:59:54.487363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.679 qpair failed and we were unable to recover it. 
00:30:02.679 [2024-07-15 12:59:54.497162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.679 [2024-07-15 12:59:54.497287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.679 [2024-07-15 12:59:54.497306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.679 [2024-07-15 12:59:54.497316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.679 [2024-07-15 12:59:54.497324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.679 [2024-07-15 12:59:54.497342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.679 qpair failed and we were unable to recover it. 
00:30:02.679 [2024-07-15 12:59:54.507296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.679 [2024-07-15 12:59:54.507423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.679 [2024-07-15 12:59:54.507443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.679 [2024-07-15 12:59:54.507453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.679 [2024-07-15 12:59:54.507461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.679 [2024-07-15 12:59:54.507480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.679 qpair failed and we were unable to recover it. 
00:30:02.679 [2024-07-15 12:59:54.517474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.679 [2024-07-15 12:59:54.517591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.679 [2024-07-15 12:59:54.517611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.679 [2024-07-15 12:59:54.517621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.679 [2024-07-15 12:59:54.517629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.679 [2024-07-15 12:59:54.517648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.679 qpair failed and we were unable to recover it. 
00:30:02.679 [2024-07-15 12:59:54.527345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.679 [2024-07-15 12:59:54.527435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.679 [2024-07-15 12:59:54.527456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.679 [2024-07-15 12:59:54.527465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.679 [2024-07-15 12:59:54.527474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.679 [2024-07-15 12:59:54.527493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.679 qpair failed and we were unable to recover it. 
00:30:02.679 [2024-07-15 12:59:54.537383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.679 [2024-07-15 12:59:54.537468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.679 [2024-07-15 12:59:54.537488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.679 [2024-07-15 12:59:54.537497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.679 [2024-07-15 12:59:54.537510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.679 [2024-07-15 12:59:54.537528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.679 qpair failed and we were unable to recover it. 
00:30:02.679 [2024-07-15 12:59:54.547420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.679 [2024-07-15 12:59:54.547520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.679 [2024-07-15 12:59:54.547542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.679 [2024-07-15 12:59:54.547552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.679 [2024-07-15 12:59:54.547561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.679 [2024-07-15 12:59:54.547581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.679 qpair failed and we were unable to recover it. 
00:30:02.679 [2024-07-15 12:59:54.557562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.679 [2024-07-15 12:59:54.557678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.679 [2024-07-15 12:59:54.557698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.679 [2024-07-15 12:59:54.557708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.679 [2024-07-15 12:59:54.557717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.679 [2024-07-15 12:59:54.557736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.679 qpair failed and we were unable to recover it. 
00:30:02.680 [2024-07-15 12:59:54.567501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.680 [2024-07-15 12:59:54.567645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.680 [2024-07-15 12:59:54.567666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.680 [2024-07-15 12:59:54.567675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.680 [2024-07-15 12:59:54.567684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.680 [2024-07-15 12:59:54.567704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.680 qpair failed and we were unable to recover it. 
00:30:02.680 [2024-07-15 12:59:54.577479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.680 [2024-07-15 12:59:54.577583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.680 [2024-07-15 12:59:54.577603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.680 [2024-07-15 12:59:54.577612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.680 [2024-07-15 12:59:54.577621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.680 [2024-07-15 12:59:54.577639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.680 qpair failed and we were unable to recover it. 
00:30:02.680 [2024-07-15 12:59:54.587535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.680 [2024-07-15 12:59:54.587639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.680 [2024-07-15 12:59:54.587658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.680 [2024-07-15 12:59:54.587668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.680 [2024-07-15 12:59:54.587676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.680 [2024-07-15 12:59:54.587694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.680 qpair failed and we were unable to recover it. 
00:30:02.680 [2024-07-15 12:59:54.597801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.680 [2024-07-15 12:59:54.597918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.680 [2024-07-15 12:59:54.597937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.680 [2024-07-15 12:59:54.597947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.680 [2024-07-15 12:59:54.597955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.680 [2024-07-15 12:59:54.597974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.680 qpair failed and we were unable to recover it. 
00:30:02.680 [2024-07-15 12:59:54.607586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.680 [2024-07-15 12:59:54.607674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.680 [2024-07-15 12:59:54.607694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.680 [2024-07-15 12:59:54.607704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.680 [2024-07-15 12:59:54.607713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.680 [2024-07-15 12:59:54.607731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.680 qpair failed and we were unable to recover it. 
00:30:02.940 [2024-07-15 12:59:54.617587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.940 [2024-07-15 12:59:54.617694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.940 [2024-07-15 12:59:54.617714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.940 [2024-07-15 12:59:54.617724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.940 [2024-07-15 12:59:54.617732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.940 [2024-07-15 12:59:54.617751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.940 qpair failed and we were unable to recover it. 
00:30:02.940 [2024-07-15 12:59:54.627596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.940 [2024-07-15 12:59:54.627680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.940 [2024-07-15 12:59:54.627700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.940 [2024-07-15 12:59:54.627709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.940 [2024-07-15 12:59:54.627722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.940 [2024-07-15 12:59:54.627741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.940 qpair failed and we were unable to recover it. 
00:30:02.940 [2024-07-15 12:59:54.637833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.940 [2024-07-15 12:59:54.637957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.940 [2024-07-15 12:59:54.637977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.940 [2024-07-15 12:59:54.637986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.941 [2024-07-15 12:59:54.637995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.941 [2024-07-15 12:59:54.638014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.941 qpair failed and we were unable to recover it. 
00:30:02.941 [2024-07-15 12:59:54.647700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.941 [2024-07-15 12:59:54.647829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.941 [2024-07-15 12:59:54.647848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.941 [2024-07-15 12:59:54.647858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.941 [2024-07-15 12:59:54.647866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.941 [2024-07-15 12:59:54.647885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.941 qpair failed and we were unable to recover it. 
00:30:02.941 [2024-07-15 12:59:54.657738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.941 [2024-07-15 12:59:54.657821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.941 [2024-07-15 12:59:54.657841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.941 [2024-07-15 12:59:54.657850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.941 [2024-07-15 12:59:54.657859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.941 [2024-07-15 12:59:54.657877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.941 qpair failed and we were unable to recover it. 
00:30:02.941 [2024-07-15 12:59:54.667741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.941 [2024-07-15 12:59:54.667872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.941 [2024-07-15 12:59:54.667891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.941 [2024-07-15 12:59:54.667901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.941 [2024-07-15 12:59:54.667909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.941 [2024-07-15 12:59:54.667928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.941 qpair failed and we were unable to recover it. 
00:30:02.941 [2024-07-15 12:59:54.678044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.941 [2024-07-15 12:59:54.678161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.941 [2024-07-15 12:59:54.678181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.941 [2024-07-15 12:59:54.678190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.941 [2024-07-15 12:59:54.678198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.941 [2024-07-15 12:59:54.678217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.941 qpair failed and we were unable to recover it. 
00:30:02.941 [2024-07-15 12:59:54.687819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.941 [2024-07-15 12:59:54.687924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.941 [2024-07-15 12:59:54.687944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.941 [2024-07-15 12:59:54.687953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.941 [2024-07-15 12:59:54.687962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.941 [2024-07-15 12:59:54.687981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.941 qpair failed and we were unable to recover it. 
00:30:02.941 [2024-07-15 12:59:54.697909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.941 [2024-07-15 12:59:54.698003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.941 [2024-07-15 12:59:54.698023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.941 [2024-07-15 12:59:54.698033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.941 [2024-07-15 12:59:54.698042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.941 [2024-07-15 12:59:54.698061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.941 qpair failed and we were unable to recover it. 
00:30:02.941 [2024-07-15 12:59:54.707844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.941 [2024-07-15 12:59:54.707939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.941 [2024-07-15 12:59:54.707960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.941 [2024-07-15 12:59:54.707970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.941 [2024-07-15 12:59:54.707979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.941 [2024-07-15 12:59:54.707998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.941 qpair failed and we were unable to recover it. 
00:30:02.941 [2024-07-15 12:59:54.718144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.941 [2024-07-15 12:59:54.718306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.941 [2024-07-15 12:59:54.718326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.941 [2024-07-15 12:59:54.718340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.941 [2024-07-15 12:59:54.718349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.941 [2024-07-15 12:59:54.718368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.941 qpair failed and we were unable to recover it. 
00:30:02.941 [2024-07-15 12:59:54.727978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.941 [2024-07-15 12:59:54.728070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.941 [2024-07-15 12:59:54.728089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.941 [2024-07-15 12:59:54.728099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.941 [2024-07-15 12:59:54.728107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.941 [2024-07-15 12:59:54.728126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.941 qpair failed and we were unable to recover it. 
00:30:02.941 [2024-07-15 12:59:54.738081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.941 [2024-07-15 12:59:54.738170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.941 [2024-07-15 12:59:54.738190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.941 [2024-07-15 12:59:54.738199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.941 [2024-07-15 12:59:54.738208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.941 [2024-07-15 12:59:54.738226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.941 qpair failed and we were unable to recover it. 
00:30:02.941 [2024-07-15 12:59:54.748047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.941 [2024-07-15 12:59:54.748134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.941 [2024-07-15 12:59:54.748154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.941 [2024-07-15 12:59:54.748163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.941 [2024-07-15 12:59:54.748171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.941 [2024-07-15 12:59:54.748189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.941 qpair failed and we were unable to recover it. 
00:30:02.941 [2024-07-15 12:59:54.758275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.941 [2024-07-15 12:59:54.758400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.941 [2024-07-15 12:59:54.758422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.941 [2024-07-15 12:59:54.758432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.941 [2024-07-15 12:59:54.758440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.941 [2024-07-15 12:59:54.758460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.941 qpair failed and we were unable to recover it. 
00:30:02.941 [2024-07-15 12:59:54.768112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.941 [2024-07-15 12:59:54.768219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.941 [2024-07-15 12:59:54.768238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.941 [2024-07-15 12:59:54.768248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.941 [2024-07-15 12:59:54.768262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.941 [2024-07-15 12:59:54.768282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.941 qpair failed and we were unable to recover it. 
00:30:02.941 [2024-07-15 12:59:54.778134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.941 [2024-07-15 12:59:54.778275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.941 [2024-07-15 12:59:54.778294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.941 [2024-07-15 12:59:54.778304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.941 [2024-07-15 12:59:54.778312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.941 [2024-07-15 12:59:54.778330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.941 qpair failed and we were unable to recover it. 
00:30:02.942 [2024-07-15 12:59:54.788158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.942 [2024-07-15 12:59:54.788245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.942 [2024-07-15 12:59:54.788269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.942 [2024-07-15 12:59:54.788279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.942 [2024-07-15 12:59:54.788288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.942 [2024-07-15 12:59:54.788306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.942 qpair failed and we were unable to recover it. 
00:30:02.942 [2024-07-15 12:59:54.798408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.942 [2024-07-15 12:59:54.798524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.942 [2024-07-15 12:59:54.798543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.942 [2024-07-15 12:59:54.798552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.942 [2024-07-15 12:59:54.798561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.942 [2024-07-15 12:59:54.798580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.942 qpair failed and we were unable to recover it. 
00:30:02.942 [2024-07-15 12:59:54.808263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.942 [2024-07-15 12:59:54.808357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.942 [2024-07-15 12:59:54.808377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.942 [2024-07-15 12:59:54.808390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.942 [2024-07-15 12:59:54.808399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.942 [2024-07-15 12:59:54.808418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.942 qpair failed and we were unable to recover it. 
00:30:02.942 [2024-07-15 12:59:54.818305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.942 [2024-07-15 12:59:54.818401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.942 [2024-07-15 12:59:54.818421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.942 [2024-07-15 12:59:54.818430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.942 [2024-07-15 12:59:54.818439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.942 [2024-07-15 12:59:54.818458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.942 qpair failed and we were unable to recover it. 
00:30:02.942 [2024-07-15 12:59:54.828309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.942 [2024-07-15 12:59:54.828415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.942 [2024-07-15 12:59:54.828435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.942 [2024-07-15 12:59:54.828444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.942 [2024-07-15 12:59:54.828453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:02.942 [2024-07-15 12:59:54.828472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.942 qpair failed and we were unable to recover it. 
00:30:02.942 [2024-07-15 12:59:54.838575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.942 [2024-07-15 12:59:54.838718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.942 [2024-07-15 12:59:54.838737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.942 [2024-07-15 12:59:54.838746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.942 [2024-07-15 12:59:54.838754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.942 [2024-07-15 12:59:54.838772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.942 qpair failed and we were unable to recover it.
00:30:02.942 [2024-07-15 12:59:54.848462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.942 [2024-07-15 12:59:54.848585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.942 [2024-07-15 12:59:54.848605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.942 [2024-07-15 12:59:54.848613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.942 [2024-07-15 12:59:54.848622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.942 [2024-07-15 12:59:54.848642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.942 qpair failed and we were unable to recover it.
00:30:02.942 [2024-07-15 12:59:54.858478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.942 [2024-07-15 12:59:54.858580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.942 [2024-07-15 12:59:54.858600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.942 [2024-07-15 12:59:54.858609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.942 [2024-07-15 12:59:54.858617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.942 [2024-07-15 12:59:54.858636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.942 qpair failed and we were unable to recover it.
00:30:02.942 [2024-07-15 12:59:54.868377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.942 [2024-07-15 12:59:54.868489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.942 [2024-07-15 12:59:54.868509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.942 [2024-07-15 12:59:54.868518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.942 [2024-07-15 12:59:54.868526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.942 [2024-07-15 12:59:54.868546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.942 qpair failed and we were unable to recover it.
00:30:02.942 [2024-07-15 12:59:54.878719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:02.942 [2024-07-15 12:59:54.878834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:02.942 [2024-07-15 12:59:54.878854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:02.942 [2024-07-15 12:59:54.878863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:02.942 [2024-07-15 12:59:54.878871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:02.942 [2024-07-15 12:59:54.878889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.942 qpair failed and we were unable to recover it.
00:30:03.203 [2024-07-15 12:59:54.888508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.203 [2024-07-15 12:59:54.888612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.203 [2024-07-15 12:59:54.888632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.203 [2024-07-15 12:59:54.888641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.203 [2024-07-15 12:59:54.888650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.203 [2024-07-15 12:59:54.888668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.203 qpair failed and we were unable to recover it.
00:30:03.203 [2024-07-15 12:59:54.898538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.203 [2024-07-15 12:59:54.898624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.203 [2024-07-15 12:59:54.898644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.203 [2024-07-15 12:59:54.898657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.203 [2024-07-15 12:59:54.898665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.203 [2024-07-15 12:59:54.898684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.203 qpair failed and we were unable to recover it.
00:30:03.203 [2024-07-15 12:59:54.908584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.203 [2024-07-15 12:59:54.908689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.203 [2024-07-15 12:59:54.908709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.203 [2024-07-15 12:59:54.908718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.203 [2024-07-15 12:59:54.908726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.203 [2024-07-15 12:59:54.908744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.203 qpair failed and we were unable to recover it.
00:30:03.203 [2024-07-15 12:59:54.918752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.203 [2024-07-15 12:59:54.918889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.203 [2024-07-15 12:59:54.918908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.203 [2024-07-15 12:59:54.918917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.203 [2024-07-15 12:59:54.918925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.203 [2024-07-15 12:59:54.918944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.203 qpair failed and we were unable to recover it.
00:30:03.203 [2024-07-15 12:59:54.928653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.203 [2024-07-15 12:59:54.928747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.203 [2024-07-15 12:59:54.928766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.203 [2024-07-15 12:59:54.928775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.203 [2024-07-15 12:59:54.928784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.203 [2024-07-15 12:59:54.928803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.203 qpair failed and we were unable to recover it.
00:30:03.203 [2024-07-15 12:59:54.938718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.203 [2024-07-15 12:59:54.938852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.203 [2024-07-15 12:59:54.938872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.203 [2024-07-15 12:59:54.938881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.203 [2024-07-15 12:59:54.938889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.203 [2024-07-15 12:59:54.938908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.203 qpair failed and we were unable to recover it.
00:30:03.203 [2024-07-15 12:59:54.948703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.203 [2024-07-15 12:59:54.948792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.203 [2024-07-15 12:59:54.948813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.203 [2024-07-15 12:59:54.948822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.203 [2024-07-15 12:59:54.948830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.203 [2024-07-15 12:59:54.948849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.203 qpair failed and we were unable to recover it.
00:30:03.203 [2024-07-15 12:59:54.958946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.203 [2024-07-15 12:59:54.959077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.203 [2024-07-15 12:59:54.959096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.203 [2024-07-15 12:59:54.959105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.203 [2024-07-15 12:59:54.959113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.203 [2024-07-15 12:59:54.959132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.203 qpair failed and we were unable to recover it.
00:30:03.203 [2024-07-15 12:59:54.968804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.203 [2024-07-15 12:59:54.968912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.203 [2024-07-15 12:59:54.968932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.203 [2024-07-15 12:59:54.968941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.203 [2024-07-15 12:59:54.968950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.204 [2024-07-15 12:59:54.968968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.204 qpair failed and we were unable to recover it.
00:30:03.204 [2024-07-15 12:59:54.978830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.204 [2024-07-15 12:59:54.978918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.204 [2024-07-15 12:59:54.978937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.204 [2024-07-15 12:59:54.978946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.204 [2024-07-15 12:59:54.978954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.204 [2024-07-15 12:59:54.978973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.204 qpair failed and we were unable to recover it.
00:30:03.204 [2024-07-15 12:59:54.988807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.204 [2024-07-15 12:59:54.988933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.204 [2024-07-15 12:59:54.988955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.204 [2024-07-15 12:59:54.988964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.204 [2024-07-15 12:59:54.988973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.204 [2024-07-15 12:59:54.988991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.204 qpair failed and we were unable to recover it.
00:30:03.204 [2024-07-15 12:59:54.999123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.204 [2024-07-15 12:59:54.999264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.204 [2024-07-15 12:59:54.999284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.204 [2024-07-15 12:59:54.999294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.204 [2024-07-15 12:59:54.999303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.204 [2024-07-15 12:59:54.999321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.204 qpair failed and we were unable to recover it.
00:30:03.204 [2024-07-15 12:59:55.008974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.204 [2024-07-15 12:59:55.009107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.204 [2024-07-15 12:59:55.009127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.204 [2024-07-15 12:59:55.009136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.204 [2024-07-15 12:59:55.009144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.204 [2024-07-15 12:59:55.009163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.204 qpair failed and we were unable to recover it.
00:30:03.204 [2024-07-15 12:59:55.018991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.204 [2024-07-15 12:59:55.019079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.204 [2024-07-15 12:59:55.019098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.204 [2024-07-15 12:59:55.019108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.204 [2024-07-15 12:59:55.019116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.204 [2024-07-15 12:59:55.019134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.204 qpair failed and we were unable to recover it.
00:30:03.204 [2024-07-15 12:59:55.029023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.204 [2024-07-15 12:59:55.029137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.204 [2024-07-15 12:59:55.029156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.204 [2024-07-15 12:59:55.029165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.204 [2024-07-15 12:59:55.029174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.204 [2024-07-15 12:59:55.029197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.204 qpair failed and we were unable to recover it.
00:30:03.204 [2024-07-15 12:59:55.039278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.204 [2024-07-15 12:59:55.039464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.204 [2024-07-15 12:59:55.039482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.204 [2024-07-15 12:59:55.039491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.204 [2024-07-15 12:59:55.039500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.204 [2024-07-15 12:59:55.039520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.204 qpair failed and we were unable to recover it.
00:30:03.204 [2024-07-15 12:59:55.049060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.204 [2024-07-15 12:59:55.049153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.204 [2024-07-15 12:59:55.049173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.204 [2024-07-15 12:59:55.049182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.204 [2024-07-15 12:59:55.049191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.204 [2024-07-15 12:59:55.049209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.204 qpair failed and we were unable to recover it.
00:30:03.204 [2024-07-15 12:59:55.059089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.204 [2024-07-15 12:59:55.059173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.204 [2024-07-15 12:59:55.059192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.204 [2024-07-15 12:59:55.059201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.204 [2024-07-15 12:59:55.059210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.204 [2024-07-15 12:59:55.059228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.204 qpair failed and we were unable to recover it.
00:30:03.204 [2024-07-15 12:59:55.069185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.204 [2024-07-15 12:59:55.069332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.204 [2024-07-15 12:59:55.069352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.204 [2024-07-15 12:59:55.069361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.204 [2024-07-15 12:59:55.069370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.204 [2024-07-15 12:59:55.069389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.204 qpair failed and we were unable to recover it.
00:30:03.204 [2024-07-15 12:59:55.079366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.204 [2024-07-15 12:59:55.079486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.204 [2024-07-15 12:59:55.079509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.204 [2024-07-15 12:59:55.079518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.204 [2024-07-15 12:59:55.079527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.204 [2024-07-15 12:59:55.079546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.204 qpair failed and we were unable to recover it.
00:30:03.204 [2024-07-15 12:59:55.089183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.204 [2024-07-15 12:59:55.089300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.204 [2024-07-15 12:59:55.089320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.204 [2024-07-15 12:59:55.089329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.204 [2024-07-15 12:59:55.089337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.204 [2024-07-15 12:59:55.089356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.204 qpair failed and we were unable to recover it.
00:30:03.204 [2024-07-15 12:59:55.099298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.204 [2024-07-15 12:59:55.099392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.204 [2024-07-15 12:59:55.099411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.204 [2024-07-15 12:59:55.099421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.204 [2024-07-15 12:59:55.099429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.204 [2024-07-15 12:59:55.099449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.204 qpair failed and we were unable to recover it.
00:30:03.204 [2024-07-15 12:59:55.109206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.204 [2024-07-15 12:59:55.109332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.204 [2024-07-15 12:59:55.109351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.204 [2024-07-15 12:59:55.109361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.204 [2024-07-15 12:59:55.109369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.204 [2024-07-15 12:59:55.109388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.204 qpair failed and we were unable to recover it.
00:30:03.204 [2024-07-15 12:59:55.119518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.205 [2024-07-15 12:59:55.119631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.205 [2024-07-15 12:59:55.119650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.205 [2024-07-15 12:59:55.119659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.205 [2024-07-15 12:59:55.119668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.205 [2024-07-15 12:59:55.119690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.205 qpair failed and we were unable to recover it.
00:30:03.205 [2024-07-15 12:59:55.129403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.205 [2024-07-15 12:59:55.129497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.205 [2024-07-15 12:59:55.129517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.205 [2024-07-15 12:59:55.129526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.205 [2024-07-15 12:59:55.129534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.205 [2024-07-15 12:59:55.129553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.205 qpair failed and we were unable to recover it.
00:30:03.205 [2024-07-15 12:59:55.139403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.205 [2024-07-15 12:59:55.139500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.205 [2024-07-15 12:59:55.139520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.205 [2024-07-15 12:59:55.139529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.205 [2024-07-15 12:59:55.139537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.205 [2024-07-15 12:59:55.139556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.205 qpair failed and we were unable to recover it.
00:30:03.465 [2024-07-15 12:59:55.149413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.465 [2024-07-15 12:59:55.149506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.465 [2024-07-15 12:59:55.149525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.465 [2024-07-15 12:59:55.149535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.465 [2024-07-15 12:59:55.149544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.465 [2024-07-15 12:59:55.149563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.465 qpair failed and we were unable to recover it.
00:30:03.465 [2024-07-15 12:59:55.159632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.465 [2024-07-15 12:59:55.159824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.465 [2024-07-15 12:59:55.159844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.465 [2024-07-15 12:59:55.159853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.465 [2024-07-15 12:59:55.159862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.465 [2024-07-15 12:59:55.159881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.465 qpair failed and we were unable to recover it.
00:30:03.465 [2024-07-15 12:59:55.169479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.465 [2024-07-15 12:59:55.169602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.465 [2024-07-15 12:59:55.169625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.465 [2024-07-15 12:59:55.169635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.465 [2024-07-15 12:59:55.169643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.465 [2024-07-15 12:59:55.169662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.465 qpair failed and we were unable to recover it.
00:30:03.465 [2024-07-15 12:59:55.179505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.465 [2024-07-15 12:59:55.179611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.465 [2024-07-15 12:59:55.179631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.465 [2024-07-15 12:59:55.179640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.465 [2024-07-15 12:59:55.179648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.465 [2024-07-15 12:59:55.179667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.465 qpair failed and we were unable to recover it.
00:30:03.465 [2024-07-15 12:59:55.189557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.465 [2024-07-15 12:59:55.189648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.465 [2024-07-15 12:59:55.189668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.465 [2024-07-15 12:59:55.189678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.465 [2024-07-15 12:59:55.189686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.465 [2024-07-15 12:59:55.189705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.465 qpair failed and we were unable to recover it.
00:30:03.465 [2024-07-15 12:59:55.199792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.465 [2024-07-15 12:59:55.199909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.465 [2024-07-15 12:59:55.199929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.465 [2024-07-15 12:59:55.199938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.465 [2024-07-15 12:59:55.199946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:03.465 [2024-07-15 12:59:55.199965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.465 qpair failed and we were unable to recover it. 
00:30:03.465 [2024-07-15 12:59:55.209544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.465 [2024-07-15 12:59:55.209638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.465 [2024-07-15 12:59:55.209657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.465 [2024-07-15 12:59:55.209667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.465 [2024-07-15 12:59:55.209675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:03.465 [2024-07-15 12:59:55.209698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.465 qpair failed and we were unable to recover it. 
00:30:03.465 [2024-07-15 12:59:55.219623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.466 [2024-07-15 12:59:55.219713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.466 [2024-07-15 12:59:55.219732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.466 [2024-07-15 12:59:55.219741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.466 [2024-07-15 12:59:55.219750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.466 [2024-07-15 12:59:55.219768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.466 qpair failed and we were unable to recover it.
00:30:03.466 [2024-07-15 12:59:55.229659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.466 [2024-07-15 12:59:55.229779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.466 [2024-07-15 12:59:55.229798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.466 [2024-07-15 12:59:55.229807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.466 [2024-07-15 12:59:55.229815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.466 [2024-07-15 12:59:55.229833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.466 qpair failed and we were unable to recover it.
00:30:03.466 [2024-07-15 12:59:55.239888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.466 [2024-07-15 12:59:55.240003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.466 [2024-07-15 12:59:55.240022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.466 [2024-07-15 12:59:55.240031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.466 [2024-07-15 12:59:55.240040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.466 [2024-07-15 12:59:55.240058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.466 qpair failed and we were unable to recover it.
00:30:03.466 [2024-07-15 12:59:55.249725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.466 [2024-07-15 12:59:55.249819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.466 [2024-07-15 12:59:55.249839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.466 [2024-07-15 12:59:55.249848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.466 [2024-07-15 12:59:55.249857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.466 [2024-07-15 12:59:55.249875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.466 qpair failed and we were unable to recover it.
00:30:03.466 [2024-07-15 12:59:55.259764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.466 [2024-07-15 12:59:55.259882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.466 [2024-07-15 12:59:55.259905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.466 [2024-07-15 12:59:55.259914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.466 [2024-07-15 12:59:55.259923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.466 [2024-07-15 12:59:55.259942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.466 qpair failed and we were unable to recover it.
00:30:03.466 [2024-07-15 12:59:55.269814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.466 [2024-07-15 12:59:55.269970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.466 [2024-07-15 12:59:55.269990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.466 [2024-07-15 12:59:55.269999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.466 [2024-07-15 12:59:55.270007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.466 [2024-07-15 12:59:55.270026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.466 qpair failed and we were unable to recover it.
00:30:03.466 [2024-07-15 12:59:55.280035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.466 [2024-07-15 12:59:55.280152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.466 [2024-07-15 12:59:55.280171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.466 [2024-07-15 12:59:55.280181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.466 [2024-07-15 12:59:55.280189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.466 [2024-07-15 12:59:55.280208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.466 qpair failed and we were unable to recover it.
00:30:03.466 [2024-07-15 12:59:55.289842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.466 [2024-07-15 12:59:55.289942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.466 [2024-07-15 12:59:55.289962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.466 [2024-07-15 12:59:55.289971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.466 [2024-07-15 12:59:55.289980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.466 [2024-07-15 12:59:55.289998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.466 qpair failed and we were unable to recover it.
00:30:03.466 [2024-07-15 12:59:55.300071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.466 [2024-07-15 12:59:55.300177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.466 [2024-07-15 12:59:55.300196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.466 [2024-07-15 12:59:55.300206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.466 [2024-07-15 12:59:55.300218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.466 [2024-07-15 12:59:55.300237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.466 qpair failed and we were unable to recover it.
00:30:03.466 [2024-07-15 12:59:55.309967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.466 [2024-07-15 12:59:55.310055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.466 [2024-07-15 12:59:55.310076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.466 [2024-07-15 12:59:55.310085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.466 [2024-07-15 12:59:55.310093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.466 [2024-07-15 12:59:55.310112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.466 qpair failed and we were unable to recover it.
00:30:03.466 [2024-07-15 12:59:55.320283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.466 [2024-07-15 12:59:55.320397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.466 [2024-07-15 12:59:55.320416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.466 [2024-07-15 12:59:55.320426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.466 [2024-07-15 12:59:55.320434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.466 [2024-07-15 12:59:55.320453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.466 qpair failed and we were unable to recover it.
00:30:03.466 [2024-07-15 12:59:55.330002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.466 [2024-07-15 12:59:55.330102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.466 [2024-07-15 12:59:55.330122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.466 [2024-07-15 12:59:55.330131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.466 [2024-07-15 12:59:55.330139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.466 [2024-07-15 12:59:55.330158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.466 qpair failed and we were unable to recover it.
00:30:03.466 [2024-07-15 12:59:55.340012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.466 [2024-07-15 12:59:55.340147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.466 [2024-07-15 12:59:55.340166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.466 [2024-07-15 12:59:55.340175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.466 [2024-07-15 12:59:55.340183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.466 [2024-07-15 12:59:55.340202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.466 qpair failed and we were unable to recover it.
00:30:03.466 [2024-07-15 12:59:55.350075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.466 [2024-07-15 12:59:55.350169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.466 [2024-07-15 12:59:55.350189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.466 [2024-07-15 12:59:55.350198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.466 [2024-07-15 12:59:55.350206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.466 [2024-07-15 12:59:55.350225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.466 qpair failed and we were unable to recover it.
00:30:03.466 [2024-07-15 12:59:55.360308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.467 [2024-07-15 12:59:55.360447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.467 [2024-07-15 12:59:55.360466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.467 [2024-07-15 12:59:55.360475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.467 [2024-07-15 12:59:55.360484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.467 [2024-07-15 12:59:55.360502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.467 qpair failed and we were unable to recover it.
00:30:03.467 [2024-07-15 12:59:55.370120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.467 [2024-07-15 12:59:55.370213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.467 [2024-07-15 12:59:55.370233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.467 [2024-07-15 12:59:55.370242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.467 [2024-07-15 12:59:55.370251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.467 [2024-07-15 12:59:55.370276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.467 qpair failed and we were unable to recover it.
00:30:03.467 [2024-07-15 12:59:55.380190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.467 [2024-07-15 12:59:55.380285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.467 [2024-07-15 12:59:55.380305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.467 [2024-07-15 12:59:55.380315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.467 [2024-07-15 12:59:55.380323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.467 [2024-07-15 12:59:55.380342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.467 qpair failed and we were unable to recover it.
00:30:03.467 [2024-07-15 12:59:55.390197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.467 [2024-07-15 12:59:55.390307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.467 [2024-07-15 12:59:55.390326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.467 [2024-07-15 12:59:55.390335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.467 [2024-07-15 12:59:55.390347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.467 [2024-07-15 12:59:55.390365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.467 qpair failed and we were unable to recover it.
00:30:03.467 [2024-07-15 12:59:55.400489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.467 [2024-07-15 12:59:55.400606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.467 [2024-07-15 12:59:55.400626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.467 [2024-07-15 12:59:55.400635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.467 [2024-07-15 12:59:55.400643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.467 [2024-07-15 12:59:55.400662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.467 qpair failed and we were unable to recover it.
00:30:03.728 [2024-07-15 12:59:55.410328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.728 [2024-07-15 12:59:55.410428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.728 [2024-07-15 12:59:55.410448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.728 [2024-07-15 12:59:55.410457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.728 [2024-07-15 12:59:55.410466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.728 [2024-07-15 12:59:55.410486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.728 qpair failed and we were unable to recover it.
00:30:03.728 [2024-07-15 12:59:55.420294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.728 [2024-07-15 12:59:55.420390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.728 [2024-07-15 12:59:55.420409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.728 [2024-07-15 12:59:55.420418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.728 [2024-07-15 12:59:55.420427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.728 [2024-07-15 12:59:55.420446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.728 qpair failed and we were unable to recover it.
00:30:03.728 [2024-07-15 12:59:55.430391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.728 [2024-07-15 12:59:55.430477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.728 [2024-07-15 12:59:55.430496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.728 [2024-07-15 12:59:55.430506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.728 [2024-07-15 12:59:55.430514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.728 [2024-07-15 12:59:55.430533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.728 qpair failed and we were unable to recover it.
00:30:03.728 [2024-07-15 12:59:55.440604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.728 [2024-07-15 12:59:55.440727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.728 [2024-07-15 12:59:55.440746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.728 [2024-07-15 12:59:55.440756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.728 [2024-07-15 12:59:55.440764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.728 [2024-07-15 12:59:55.440782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.728 qpair failed and we were unable to recover it.
00:30:03.728 [2024-07-15 12:59:55.450401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.729 [2024-07-15 12:59:55.450505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.729 [2024-07-15 12:59:55.450524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.729 [2024-07-15 12:59:55.450533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.729 [2024-07-15 12:59:55.450542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.729 [2024-07-15 12:59:55.450560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.729 qpair failed and we were unable to recover it.
00:30:03.729 [2024-07-15 12:59:55.460489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.729 [2024-07-15 12:59:55.460618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.729 [2024-07-15 12:59:55.460638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.729 [2024-07-15 12:59:55.460648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.729 [2024-07-15 12:59:55.460656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.729 [2024-07-15 12:59:55.460675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.729 qpair failed and we were unable to recover it.
00:30:03.729 [2024-07-15 12:59:55.470483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.729 [2024-07-15 12:59:55.470572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.729 [2024-07-15 12:59:55.470592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.729 [2024-07-15 12:59:55.470602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.729 [2024-07-15 12:59:55.470610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.729 [2024-07-15 12:59:55.470628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.729 qpair failed and we were unable to recover it.
00:30:03.729 [2024-07-15 12:59:55.480713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.729 [2024-07-15 12:59:55.480859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.729 [2024-07-15 12:59:55.480878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.729 [2024-07-15 12:59:55.480891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.729 [2024-07-15 12:59:55.480899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.729 [2024-07-15 12:59:55.480919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.729 qpair failed and we were unable to recover it.
00:30:03.729 [2024-07-15 12:59:55.490589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.729 [2024-07-15 12:59:55.490686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.729 [2024-07-15 12:59:55.490705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.729 [2024-07-15 12:59:55.490714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.729 [2024-07-15 12:59:55.490723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.729 [2024-07-15 12:59:55.490741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.729 qpair failed and we were unable to recover it.
00:30:03.729 [2024-07-15 12:59:55.500628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.729 [2024-07-15 12:59:55.500745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.729 [2024-07-15 12:59:55.500764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.729 [2024-07-15 12:59:55.500773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.729 [2024-07-15 12:59:55.500781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.729 [2024-07-15 12:59:55.500799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.729 qpair failed and we were unable to recover it.
00:30:03.729 [2024-07-15 12:59:55.510648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.729 [2024-07-15 12:59:55.510747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.729 [2024-07-15 12:59:55.510767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.729 [2024-07-15 12:59:55.510776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.729 [2024-07-15 12:59:55.510785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70
00:30:03.729 [2024-07-15 12:59:55.510803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.729 qpair failed and we were unable to recover it.
00:30:03.729 [2024-07-15 12:59:55.520840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.729 [2024-07-15 12:59:55.520955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.729 [2024-07-15 12:59:55.520974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.729 [2024-07-15 12:59:55.520983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.729 [2024-07-15 12:59:55.520991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:03.729 [2024-07-15 12:59:55.521009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.729 qpair failed and we were unable to recover it. 
00:30:03.729 [2024-07-15 12:59:55.530704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.729 [2024-07-15 12:59:55.530817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.729 [2024-07-15 12:59:55.530837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.729 [2024-07-15 12:59:55.530846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.729 [2024-07-15 12:59:55.530854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:03.729 [2024-07-15 12:59:55.530873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.729 qpair failed and we were unable to recover it. 
00:30:03.729 [2024-07-15 12:59:55.540708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.729 [2024-07-15 12:59:55.540792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.729 [2024-07-15 12:59:55.540812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.729 [2024-07-15 12:59:55.540822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.729 [2024-07-15 12:59:55.540830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:03.730 [2024-07-15 12:59:55.540848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.730 qpair failed and we were unable to recover it. 
00:30:03.730 [2024-07-15 12:59:55.550748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.730 [2024-07-15 12:59:55.550833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.730 [2024-07-15 12:59:55.550855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.730 [2024-07-15 12:59:55.550865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.730 [2024-07-15 12:59:55.550873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:03.730 [2024-07-15 12:59:55.550893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.730 qpair failed and we were unable to recover it. 
00:30:03.730 [2024-07-15 12:59:55.560978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.730 [2024-07-15 12:59:55.561097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.730 [2024-07-15 12:59:55.561117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.730 [2024-07-15 12:59:55.561126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.730 [2024-07-15 12:59:55.561134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c4ed70 00:30:03.730 [2024-07-15 12:59:55.561154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.730 qpair failed and we were unable to recover it. 
00:30:03.730 [2024-07-15 12:59:55.571022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.730 [2024-07-15 12:59:55.571192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.730 [2024-07-15 12:59:55.571253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.730 [2024-07-15 12:59:55.571320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.730 [2024-07-15 12:59:55.571353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f75c8000b90 00:30:03.730 [2024-07-15 12:59:55.571419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.730 qpair failed and we were unable to recover it. 
00:30:03.730 [2024-07-15 12:59:55.580902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.730 [2024-07-15 12:59:55.581105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.730 [2024-07-15 12:59:55.581152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.730 [2024-07-15 12:59:55.581186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.730 [2024-07-15 12:59:55.581217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f75c8000b90 00:30:03.730 [2024-07-15 12:59:55.581292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.730 qpair failed and we were unable to recover it. 
00:30:03.730 [2024-07-15 12:59:55.590891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.730 [2024-07-15 12:59:55.591003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.730 [2024-07-15 12:59:55.591039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.730 [2024-07-15 12:59:55.591054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.730 [2024-07-15 12:59:55.591066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f75d8000b90 00:30:03.730 [2024-07-15 12:59:55.591094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.730 qpair failed and we were unable to recover it. 
00:30:03.730 [2024-07-15 12:59:55.601122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.730 [2024-07-15 12:59:55.601246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.730 [2024-07-15 12:59:55.601279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.730 [2024-07-15 12:59:55.601293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.730 [2024-07-15 12:59:55.601305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f75d8000b90 00:30:03.730 [2024-07-15 12:59:55.601330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:03.730 qpair failed and we were unable to recover it. 00:30:03.730 [2024-07-15 12:59:55.601428] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:03.730 A controller has encountered a failure and is being reset. 00:30:03.730 Controller properly reset. 00:30:03.730 Initializing NVMe Controllers 00:30:03.730 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:03.730 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:03.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:03.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:03.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:03.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:03.730 Initialization complete. Launching workers. 
00:30:03.730 Starting thread on core 1 00:30:03.730 Starting thread on core 2 00:30:03.730 Starting thread on core 3 00:30:03.730 Starting thread on core 0 00:30:03.730 12:59:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:03.730 00:30:03.730 real 0m11.348s 00:30:03.730 user 0m21.855s 00:30:03.730 sys 0m4.159s 00:30:03.730 12:59:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:03.730 12:59:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:03.730 ************************************ 00:30:03.730 END TEST nvmf_target_disconnect_tc2 00:30:03.730 ************************************ 00:30:03.990 12:59:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:30:03.990 12:59:55 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:03.990 12:59:55 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:03.990 12:59:55 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:03.990 12:59:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:03.990 12:59:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:30:03.990 12:59:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:03.990 12:59:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:30:03.990 12:59:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:03.990 12:59:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:03.990 rmmod nvme_tcp 00:30:03.990 rmmod nvme_fabrics 00:30:03.990 rmmod nvme_keyring 00:30:03.990 12:59:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:03.990 
12:59:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:30:03.991 12:59:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:30:03.991 12:59:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 4117632 ']' 00:30:03.991 12:59:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 4117632 00:30:03.991 12:59:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 4117632 ']' 00:30:03.991 12:59:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 4117632 00:30:03.991 12:59:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:30:03.991 12:59:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:03.991 12:59:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4117632 00:30:03.991 12:59:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:30:03.991 12:59:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:30:03.991 12:59:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4117632' 00:30:03.991 killing process with pid 4117632 00:30:03.991 12:59:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 4117632 00:30:03.991 12:59:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 4117632 00:30:04.250 12:59:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:04.250 12:59:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:04.250 12:59:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:04.250 12:59:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:04.250 12:59:56 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:04.250 12:59:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.250 12:59:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:04.250 12:59:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.822 12:59:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:06.822 00:30:06.822 real 0m20.083s 00:30:06.822 user 0m48.731s 00:30:06.823 sys 0m9.091s 00:30:06.823 12:59:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:06.823 12:59:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:06.823 ************************************ 00:30:06.823 END TEST nvmf_target_disconnect 00:30:06.823 ************************************ 00:30:06.823 12:59:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:06.823 12:59:58 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:30:06.823 12:59:58 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:06.823 12:59:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:06.823 12:59:58 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:30:06.823 00:30:06.823 real 23m25.336s 00:30:06.823 user 51m49.467s 00:30:06.823 sys 6m39.354s 00:30:06.823 12:59:58 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:06.823 12:59:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:06.823 ************************************ 00:30:06.823 END TEST nvmf_tcp 00:30:06.823 ************************************ 00:30:06.823 12:59:58 -- common/autotest_common.sh@1142 -- # return 0 00:30:06.823 12:59:58 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:30:06.823 12:59:58 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:06.823 12:59:58 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:06.823 12:59:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:06.823 12:59:58 -- common/autotest_common.sh@10 -- # set +x 00:30:06.823 ************************************ 00:30:06.823 START TEST spdkcli_nvmf_tcp 00:30:06.823 ************************************ 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:06.823 * Looking for test storage... 00:30:06.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.823 12:59:58 spdkcli_nvmf_tcp 
-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=4119352 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 4119352 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 4119352 ']' 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:06.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:06.823 12:59:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:06.823 [2024-07-15 12:59:58.520845] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:30:06.823 [2024-07-15 12:59:58.520904] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4119352 ] 00:30:06.823 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.823 [2024-07-15 12:59:58.604442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:06.823 [2024-07-15 12:59:58.700283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.823 [2024-07-15 12:59:58.700288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.757 12:59:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:07.757 12:59:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:30:07.757 12:59:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:07.757 12:59:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:07.757 12:59:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:07.757 12:59:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:07.757 12:59:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:07.757 12:59:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:07.757 12:59:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:07.757 12:59:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:07.757 12:59:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:07.757 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:07.757 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:07.757 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:07.757 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:07.757 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:07.757 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:07.757 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:07.757 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:07.757 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:07.757 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:07.757 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:07.757 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:07.757 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:07.757 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:07.757 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:07.758 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' 
True 00:30:07.758 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:07.758 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:07.758 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:07.758 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:07.758 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:07.758 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:07.758 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:07.758 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:07.758 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:07.758 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:07.758 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:07.758 ' 00:30:11.044 [2024-07-15 13:00:02.231689] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:11.980 [2024-07-15 13:00:03.552424] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:14.514 [2024-07-15 13:00:06.008573] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:16.418 [2024-07-15 13:00:08.147666] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:18.321 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', 
True] 00:30:18.321 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:18.321 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:18.321 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:18.321 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:18.321 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:18.321 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:18.321 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:18.321 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:18.321 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:18.321 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:18.321 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:18.321 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:18.321 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:18.321 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:18.321 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:18.321 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 
127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:18.321 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:18.321 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:18.321 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:18.321 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:18.321 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:18.321 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:18.321 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:18.321 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:18.321 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:18.321 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:18.321 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:18.321 13:00:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:18.321 13:00:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:18.321 13:00:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:18.321 13:00:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:18.321 13:00:09 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:30:18.321 13:00:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:18.321 13:00:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:18.321 13:00:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:18.580 13:00:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:18.580 13:00:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:18.580 13:00:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:18.580 13:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:18.580 13:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:18.580 13:00:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:18.580 13:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:18.580 13:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:18.580 13:00:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:18.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:18.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:18.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:18.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses 
delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:18.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:18.580 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:18.580 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:18.580 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:18.580 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:18.580 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:18.580 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:18.580 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:18.580 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:18.580 ' 00:30:25.156 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:25.156 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:25.156 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:25.156 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:25.156 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:25.156 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:25.156 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:25.156 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:25.156 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:25.156 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:25.156 
Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:25.156 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:25.156 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:25.156 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 4119352 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 4119352 ']' 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 4119352 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4119352 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4119352' 00:30:25.156 killing process with pid 4119352 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 4119352 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 4119352 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 4119352 ']' 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@14 -- # killprocess 4119352 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 4119352 ']' 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 4119352 00:30:25.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (4119352) - No such process 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 4119352 is not found' 00:30:25.156 Process with pid 4119352 is not found 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:25.156 00:30:25.156 real 0m17.997s 00:30:25.156 user 0m39.692s 00:30:25.156 sys 0m1.018s 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:25.156 13:00:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:25.156 ************************************ 00:30:25.156 END TEST spdkcli_nvmf_tcp 00:30:25.156 ************************************ 00:30:25.156 13:00:16 -- common/autotest_common.sh@1142 -- # return 0 00:30:25.156 13:00:16 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:25.156 13:00:16 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:25.156 13:00:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:25.156 13:00:16 -- common/autotest_common.sh@10 -- # set +x 00:30:25.156 ************************************ 00:30:25.156 START TEST nvmf_identify_passthru 00:30:25.157 
************************************ 00:30:25.157 13:00:16 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:25.157 * Looking for test storage... 00:30:25.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:25.157 13:00:16 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:25.157 13:00:16 nvmf_identify_passthru 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:25.157 13:00:16 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:25.157 13:00:16 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:25.157 13:00:16 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:25.157 13:00:16 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.157 13:00:16 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.157 13:00:16 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.157 13:00:16 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:25.157 13:00:16 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:25.157 13:00:16 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:25.157 13:00:16 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:25.157 13:00:16 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:25.157 13:00:16 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:25.157 13:00:16 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.157 13:00:16 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.157 13:00:16 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.157 13:00:16 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:30:25.157 13:00:16 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.157 13:00:16 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.157 13:00:16 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:25.157 13:00:16 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:25.157 13:00:16 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:30:25.157 13:00:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:30.429 13:00:21 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:30.429 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:30.429 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:30.429 13:00:21 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:30.429 Found net devices under 0000:af:00.0: cvl_0_0 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.429 13:00:21 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:30.429 Found net devices under 0000:af:00.1: cvl_0_1 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:30.429 13:00:21 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:30.429 13:00:21 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:30.429 13:00:22 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:30.429 13:00:22 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:30.429 13:00:22 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:30.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:30.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:30:30.429 00:30:30.429 --- 10.0.0.2 ping statistics --- 00:30:30.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:30.429 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:30:30.429 13:00:22 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:30.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:30.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:30:30.429 00:30:30.429 --- 10.0.0.1 ping statistics --- 00:30:30.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:30.429 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:30:30.429 13:00:22 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:30.429 13:00:22 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:30.429 13:00:22 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:30.429 13:00:22 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:30.429 13:00:22 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:30.429 13:00:22 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:30.429 13:00:22 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:30.429 13:00:22 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:30.430 13:00:22 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:30.430 13:00:22 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:30.430 13:00:22 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:30.430 13:00:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:30.430 13:00:22 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:30.430 13:00:22 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:30:30.430 13:00:22 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:30:30.430 13:00:22 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:30:30.430 13:00:22 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:30:30.430 13:00:22 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:30:30.430 13:00:22 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:30:30.430 13:00:22 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:30.430 13:00:22 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:30.430 13:00:22 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:30.430 13:00:22 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:30.430 13:00:22 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:86:00.0 00:30:30.430 13:00:22 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:86:00.0 00:30:30.430 13:00:22 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:86:00.0 00:30:30.430 13:00:22 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:86:00.0 ']' 00:30:30.430 13:00:22 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:30:30.430 13:00:22 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:30.430 13:00:22 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:30.430 EAL: No free 2048 kB hugepages reported on node 1 00:30:34.620 13:00:26 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ916308MR1P0FGN 00:30:34.620 13:00:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:30:34.620 13:00:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:30:34.620 13:00:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:34.880 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.072 13:00:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:39.072 13:00:30 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:39.072 13:00:30 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:39.072 13:00:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:39.072 13:00:30 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:39.072 13:00:30 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:39.072 13:00:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:39.072 13:00:30 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=4127579 00:30:39.072 13:00:30 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:39.072 13:00:30 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 4127579 00:30:39.072 13:00:30 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 4127579 ']' 00:30:39.072 13:00:30 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:39.072 13:00:30 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.072 13:00:30 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:39.072 13:00:30 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
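The trace above shows `identify_passthru.sh` pulling the controller's serial and model out of `spdk_nvme_identify` output with a grep/awk pipeline. A minimal standalone sketch of that extraction — the identify text below is a fabricated sample for illustration, not real device output:

```shell
# Hedged sketch: extract the "Serial Number" / "Model Number" fields the way
# the traced script does (grep the label line, print the third column).
# The identify_output text is a fabricated sample, not captured device output.
identify_output='Serial Number: BTLJ916308MR1P0FGN
Model Number: INTEL SSDPE2KX010T8'
serial=$(printf '%s\n' "$identify_output" | grep 'Serial Number:' | awk '{print $3}')
model=$(printf '%s\n' "$identify_output" | grep 'Model Number:' | awk '{print $3}')
echo "$serial"   # BTLJ916308MR1P0FGN
echo "$model"    # INTEL
```

Note that `awk '{print $3}'` keeps only the first whitespace-separated token after the label, which is why the log records the model number as just `INTEL`.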
00:30:39.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.072 13:00:30 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:39.072 13:00:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:39.072 [2024-07-15 13:00:30.862582] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:30:39.072 [2024-07-15 13:00:30.862638] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.072 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.072 [2024-07-15 13:00:30.948433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:39.331 [2024-07-15 13:00:31.039555] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:39.331 [2024-07-15 13:00:31.039598] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.331 [2024-07-15 13:00:31.039608] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.331 [2024-07-15 13:00:31.039617] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.331 [2024-07-15 13:00:31.039625] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:39.331 [2024-07-15 13:00:31.039682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.331 [2024-07-15 13:00:31.039806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:39.331 [2024-07-15 13:00:31.039845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.331 [2024-07-15 13:00:31.039845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:39.898 13:00:31 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:39.898 13:00:31 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:30:39.898 13:00:31 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:39.898 13:00:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.898 13:00:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:39.898 INFO: Log level set to 20 00:30:39.898 INFO: Requests: 00:30:39.898 { 00:30:39.898 "jsonrpc": "2.0", 00:30:39.898 "method": "nvmf_set_config", 00:30:39.898 "id": 1, 00:30:39.898 "params": { 00:30:39.898 "admin_cmd_passthru": { 00:30:39.898 "identify_ctrlr": true 00:30:39.898 } 00:30:39.898 } 00:30:39.898 } 00:30:39.898 00:30:39.898 INFO: response: 00:30:39.898 { 00:30:39.898 "jsonrpc": "2.0", 00:30:39.898 "id": 1, 00:30:39.898 "result": true 00:30:39.898 } 00:30:39.898 00:30:39.898 13:00:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.898 13:00:31 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:39.898 13:00:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.898 13:00:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:39.898 INFO: Setting log level to 20 00:30:39.898 INFO: Setting log level to 20 00:30:39.898 INFO: Log level set to 20 00:30:39.898 INFO: Log level set to 20 00:30:39.898 
INFO: Requests: 00:30:39.898 { 00:30:39.898 "jsonrpc": "2.0", 00:30:39.898 "method": "framework_start_init", 00:30:39.898 "id": 1 00:30:39.898 } 00:30:39.898 00:30:39.898 INFO: Requests: 00:30:39.898 { 00:30:39.898 "jsonrpc": "2.0", 00:30:39.898 "method": "framework_start_init", 00:30:39.898 "id": 1 00:30:39.898 } 00:30:39.898 00:30:39.898 [2024-07-15 13:00:31.822812] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:39.898 INFO: response: 00:30:39.898 { 00:30:39.898 "jsonrpc": "2.0", 00:30:39.898 "id": 1, 00:30:39.898 "result": true 00:30:39.898 } 00:30:39.898 00:30:39.898 INFO: response: 00:30:39.898 { 00:30:39.898 "jsonrpc": "2.0", 00:30:39.898 "id": 1, 00:30:39.898 "result": true 00:30:39.898 } 00:30:39.898 00:30:39.898 13:00:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.898 13:00:31 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:39.898 13:00:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.898 13:00:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:39.898 INFO: Setting log level to 40 00:30:39.898 INFO: Setting log level to 40 00:30:39.898 INFO: Setting log level to 40 00:30:39.898 [2024-07-15 13:00:31.836773] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.156 13:00:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.156 13:00:31 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:40.156 13:00:31 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:40.156 13:00:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:40.156 13:00:31 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0 00:30:40.156 13:00:31 
nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.156 13:00:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:43.436 Nvme0n1 00:30:43.436 13:00:34 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.436 13:00:34 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:43.436 13:00:34 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.436 13:00:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:43.436 13:00:34 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.436 13:00:34 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:43.436 13:00:34 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.436 13:00:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:43.436 13:00:34 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.436 13:00:34 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:43.436 13:00:34 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.436 13:00:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:43.436 [2024-07-15 13:00:34.765314] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.436 13:00:34 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.436 13:00:34 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:43.436 13:00:34 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.436 13:00:34 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:43.436 [ 00:30:43.436 { 00:30:43.436 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:43.436 "subtype": "Discovery", 00:30:43.436 "listen_addresses": [], 00:30:43.436 "allow_any_host": true, 00:30:43.437 "hosts": [] 00:30:43.437 }, 00:30:43.437 { 00:30:43.437 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:43.437 "subtype": "NVMe", 00:30:43.437 "listen_addresses": [ 00:30:43.437 { 00:30:43.437 "trtype": "TCP", 00:30:43.437 "adrfam": "IPv4", 00:30:43.437 "traddr": "10.0.0.2", 00:30:43.437 "trsvcid": "4420" 00:30:43.437 } 00:30:43.437 ], 00:30:43.437 "allow_any_host": true, 00:30:43.437 "hosts": [], 00:30:43.437 "serial_number": "SPDK00000000000001", 00:30:43.437 "model_number": "SPDK bdev Controller", 00:30:43.437 "max_namespaces": 1, 00:30:43.437 "min_cntlid": 1, 00:30:43.437 "max_cntlid": 65519, 00:30:43.437 "namespaces": [ 00:30:43.437 { 00:30:43.437 "nsid": 1, 00:30:43.437 "bdev_name": "Nvme0n1", 00:30:43.437 "name": "Nvme0n1", 00:30:43.437 "nguid": "3D813A6330DA4205AF46DD3AF9B1D4F7", 00:30:43.437 "uuid": "3d813a63-30da-4205-af46-dd3af9b1d4f7" 00:30:43.437 } 00:30:43.437 ] 00:30:43.437 } 00:30:43.437 ] 00:30:43.437 13:00:34 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.437 13:00:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:43.437 13:00:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:43.437 13:00:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:43.437 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.437 13:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ916308MR1P0FGN 00:30:43.437 13:00:35 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:43.437 13:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:43.437 13:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:43.437 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.437 13:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:43.437 13:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ916308MR1P0FGN '!=' BTLJ916308MR1P0FGN ']' 00:30:43.437 13:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:43.437 13:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:43.437 13:00:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.437 13:00:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:43.437 13:00:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.437 13:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:43.437 13:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:43.437 13:00:35 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:43.437 13:00:35 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:43.437 13:00:35 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:43.437 13:00:35 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:43.437 13:00:35 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:43.437 13:00:35 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:43.437 rmmod 
nvme_tcp 00:30:43.437 rmmod nvme_fabrics 00:30:43.437 rmmod nvme_keyring 00:30:43.437 13:00:35 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:43.437 13:00:35 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:43.437 13:00:35 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:43.437 13:00:35 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 4127579 ']' 00:30:43.437 13:00:35 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 4127579 00:30:43.437 13:00:35 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 4127579 ']' 00:30:43.437 13:00:35 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 4127579 00:30:43.437 13:00:35 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:30:43.437 13:00:35 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:43.437 13:00:35 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4127579 00:30:43.437 13:00:35 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:43.437 13:00:35 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:43.437 13:00:35 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4127579' 00:30:43.437 killing process with pid 4127579 00:30:43.437 13:00:35 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 4127579 00:30:43.437 13:00:35 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 4127579 00:30:45.339 13:00:36 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:45.339 13:00:36 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:45.339 13:00:36 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:45.339 13:00:36 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
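The pass/fail criterion traced above is a plain string comparison: the serial and model read back over NVMe/TCP must match what the PCIe controller reported locally. A hedged sketch of that check, with values taken from this run's log:

```shell
# Hedged sketch of the pass-through check mirrored from the
# '[' x '!=' y ']' guards in the trace: the test only proceeds when the
# NVMe/TCP identify data equals the locally-read PCIe identify data.
nvme_serial_number=BTLJ916308MR1P0FGN   # read locally over PCIe
nvmf_serial_number=BTLJ916308MR1P0FGN   # read back over NVMe/TCP
if [ "$nvme_serial_number" != "$nvmf_serial_number" ]; then
    echo "passthrough serial mismatch" >&2
    exit 1
fi
echo "passthrough identify OK"
```

Because the target was started with `--passthru-identify-ctrlr`, the admin Identify command is forwarded to the backing controller, so any mismatch indicates the pass-through path is broken.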
00:30:45.339 13:00:36 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:45.339 13:00:36 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.339 13:00:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:45.339 13:00:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.245 13:00:38 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:47.245 00:30:47.245 real 0m22.603s 00:30:47.245 user 0m31.030s 00:30:47.245 sys 0m5.182s 00:30:47.245 13:00:39 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:47.245 13:00:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:47.245 ************************************ 00:30:47.245 END TEST nvmf_identify_passthru 00:30:47.245 ************************************ 00:30:47.245 13:00:39 -- common/autotest_common.sh@1142 -- # return 0 00:30:47.245 13:00:39 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:47.245 13:00:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:47.245 13:00:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:47.245 13:00:39 -- common/autotest_common.sh@10 -- # set +x 00:30:47.245 ************************************ 00:30:47.245 START TEST nvmf_dif 00:30:47.245 ************************************ 00:30:47.245 13:00:39 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:47.245 * Looking for test storage... 
00:30:47.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:47.245 13:00:39 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:47.245 13:00:39 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:47.245 13:00:39 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:47.245 13:00:39 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:47.245 13:00:39 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:47.245 13:00:39 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:47.245 13:00:39 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:47.245 13:00:39 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:47.245 13:00:39 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:47.245 13:00:39 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:47.245 13:00:39 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:47.245 13:00:39 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:47.245 13:00:39 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:47.245 13:00:39 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:47.245 13:00:39 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:47.245 13:00:39 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:47.245 13:00:39 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:47.245 13:00:39 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:47.245 13:00:39 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:47.245 13:00:39 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:47.245 13:00:39 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:47.245 13:00:39 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:47.245 13:00:39 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.245 13:00:39 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.245 13:00:39 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.504 13:00:39 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:30:47.504 13:00:39 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.504 13:00:39 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:47.504 13:00:39 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:47.504 13:00:39 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:47.504 13:00:39 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:47.504 13:00:39 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:47.504 13:00:39 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:47.504 13:00:39 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:47.504 13:00:39 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:47.504 13:00:39 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:47.504 13:00:39 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:47.504 13:00:39 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:47.504 13:00:39 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:47.504 13:00:39 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:47.504 13:00:39 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:47.504 13:00:39 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:47.504 13:00:39 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:47.504 13:00:39 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:47.504 13:00:39 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:47.504 13:00:39 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:47.504 13:00:39 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.504 13:00:39 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:47.504 13:00:39 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.504 13:00:39 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:47.504 13:00:39 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:47.504 13:00:39 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:47.504 13:00:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
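The `e810`/`x722`/`mlx` arrays being populated above classify NICs by PCI device ID before picking test interfaces. An illustrative sketch mapping a few of the IDs the trace references — this is a simplification; the real `nvmf/common.sh` builds bash arrays from the live PCI bus cache rather than a `case` statement:

```shell
# Hedged sketch: classify a NIC family from its PCI device ID, using a
# subset of the IDs seen in the pci_bus_cache lookups above.
nic_family() {
    case "$1" in
        0x1592|0x159b) echo e810 ;;      # Intel E810 family
        0x37d2)        echo x722 ;;      # Intel X722
        0x1017|0x1019) echo mlx ;;       # Mellanox ConnectX
        *)             echo unknown ;;
    esac
}
nic_family 0x159b   # → e810, the device ID discovered in this run
```

In this run both discovered ports (0000:af:00.0 and 0000:af:00.1, ID 0x159b) classify as e810, matching the `Found 0000:af:00.x (0x8086 - 0x159b)` lines that follow.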
00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:52.773 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 
(0x8086 - 0x159b)' 00:30:52.773 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:52.773 Found net devices under 0000:af:00.0: cvl_0_0 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:52.773 Found net devices under 0000:af:00.1: cvl_0_1 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:52.773 13:00:44 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:53.031 13:00:44 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:53.031 13:00:44 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:53.031 13:00:44 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:53.031 13:00:44 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:53.031 13:00:44 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:53.031 13:00:44 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:53.031 13:00:44 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:53.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:53.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:30:53.031 00:30:53.031 --- 10.0.0.2 ping statistics --- 00:30:53.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.031 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:30:53.031 13:00:44 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:53.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
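The `nvmf_tcp_init` commands traced in this section wire the target NIC into a network namespace so target (10.0.0.2) and initiator (10.0.0.1) talk over a real link. A hedged dry-run sketch that only prints the plan — the real commands need root, and the interface/namespace names are taken from this log:

```shell
# Hedged dry-run of the namespace wiring nvmf_tcp_init performs above.
# Printing instead of executing keeps this safe to run without root;
# names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) come from this run's log.
plan_netns_setup() {
    local ns=cvl_0_0_ns_spdk tgt=cvl_0_0 ini=cvl_0_1
    echo "ip netns add $ns"
    echo "ip link set $tgt netns $ns"
    echo "ip addr add 10.0.0.1/24 dev $ini"
    echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt"
    echo "ip link set $ini up"
    echo "ip netns exec $ns ip link set $tgt up"
    echo "ping -c 1 10.0.0.2"
}
plan_netns_setup
```

The two ping checks in the log (host to 10.0.0.2, then `ip netns exec` back to 10.0.0.1) verify connectivity in both directions before the NVMe/TCP tests start.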
00:30:53.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:30:53.031 00:30:53.031 --- 10.0.0.1 ping statistics --- 00:30:53.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.031 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:30:53.031 13:00:44 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:53.031 13:00:44 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:53.031 13:00:44 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:53.031 13:00:44 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:56.312 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:30:56.312 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:56.312 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:30:56.312 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:30:56.312 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:30:56.312 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:30:56.312 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:30:56.312 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:30:56.312 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:30:56.312 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:30:56.312 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:30:56.312 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:30:56.312 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:30:56.312 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:30:56.312 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:30:56.312 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:30:56.312 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:30:56.312 13:00:47 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:56.312 13:00:47 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:56.312 13:00:47 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:56.312 13:00:47 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:56.312 13:00:47 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:56.312 13:00:47 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:56.312 13:00:47 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:56.312 13:00:47 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:56.312 13:00:47 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:56.312 13:00:47 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:56.312 13:00:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:56.312 13:00:47 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=4133399 00:30:56.312 13:00:47 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 4133399 00:30:56.312 13:00:47 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:56.312 13:00:47 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 4133399 ']' 00:30:56.312 13:00:47 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.312 13:00:47 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:56.312 13:00:47 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:56.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:56.312 13:00:47 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:56.312 13:00:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:56.312 [2024-07-15 13:00:47.803245] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:30:56.312 [2024-07-15 13:00:47.803312] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:56.312 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.312 [2024-07-15 13:00:47.887583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.312 [2024-07-15 13:00:47.976431] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:56.312 [2024-07-15 13:00:47.976473] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:56.312 [2024-07-15 13:00:47.976483] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:56.313 [2024-07-15 13:00:47.976492] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:56.313 [2024-07-15 13:00:47.976499] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:56.313 [2024-07-15 13:00:47.976522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.313 13:00:48 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:56.313 13:00:48 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:30:56.313 13:00:48 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:56.313 13:00:48 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:56.313 13:00:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:56.313 13:00:48 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:56.313 13:00:48 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:56.313 13:00:48 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:56.313 13:00:48 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.313 13:00:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:56.313 [2024-07-15 13:00:48.119748] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:56.313 13:00:48 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.313 13:00:48 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:56.313 13:00:48 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:56.313 13:00:48 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:56.313 13:00:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:56.313 ************************************ 00:30:56.313 START TEST fio_dif_1_default 00:30:56.313 ************************************ 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:56.313 bdev_null0 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:56.313 [2024-07-15 13:00:48.192067] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:56.313 { 00:30:56.313 "params": { 00:30:56.313 "name": "Nvme$subsystem", 00:30:56.313 "trtype": "$TEST_TRANSPORT", 00:30:56.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:56.313 "adrfam": "ipv4", 00:30:56.313 "trsvcid": "$NVMF_PORT", 00:30:56.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:56.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:56.313 "hdgst": ${hdgst:-false}, 00:30:56.313 "ddgst": ${ddgst:-false} 00:30:56.313 }, 00:30:56.313 "method": "bdev_nvme_attach_controller" 00:30:56.313 } 00:30:56.313 EOF 00:30:56.313 )") 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:56.313 "params": { 00:30:56.313 "name": "Nvme0", 00:30:56.313 "trtype": "tcp", 00:30:56.313 "traddr": "10.0.0.2", 00:30:56.313 "adrfam": "ipv4", 00:30:56.313 "trsvcid": "4420", 00:30:56.313 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:56.313 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:56.313 "hdgst": false, 00:30:56.313 "ddgst": false 00:30:56.313 }, 00:30:56.313 "method": "bdev_nvme_attach_controller" 00:30:56.313 }' 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:56.313 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:56.599 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:56.599 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:56.599 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:56.599 13:00:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:56.864 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:56.865 fio-3.35 
00:30:56.865 Starting 1 thread 00:30:56.865 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.072 00:31:09.073 filename0: (groupid=0, jobs=1): err= 0: pid=4133813: Mon Jul 15 13:00:59 2024 00:31:09.073 read: IOPS=188, BW=756KiB/s (774kB/s)(7584KiB/10034msec) 00:31:09.073 slat (nsec): min=9780, max=61839, avg=20694.86, stdev=2195.63 00:31:09.073 clat (usec): min=666, max=45281, avg=21110.84, stdev=20269.85 00:31:09.073 lat (usec): min=686, max=45307, avg=21131.53, stdev=20269.74 00:31:09.073 clat percentiles (usec): 00:31:09.073 | 1.00th=[ 676], 5.00th=[ 685], 10.00th=[ 701], 20.00th=[ 734], 00:31:09.073 | 30.00th=[ 766], 40.00th=[ 824], 50.00th=[41157], 60.00th=[41157], 00:31:09.073 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:31:09.073 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:31:09.073 | 99.99th=[45351] 00:31:09.073 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=756.80, stdev=28.00, samples=20 00:31:09.073 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:31:09.073 lat (usec) : 750=26.95%, 1000=22.42% 00:31:09.073 lat (msec) : 2=0.42%, 50=50.21% 00:31:09.073 cpu : usr=94.44%, sys=5.01%, ctx=14, majf=0, minf=251 00:31:09.073 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:09.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.073 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:09.073 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:09.073 00:31:09.073 Run status group 0 (all jobs): 00:31:09.073 READ: bw=756KiB/s (774kB/s), 756KiB/s-756KiB/s (774kB/s-774kB/s), io=7584KiB (7766kB), run=10034-10034msec 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:09.073 13:00:59 
nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.073 00:31:09.073 real 0m11.264s 00:31:09.073 user 0m20.995s 00:31:09.073 sys 0m0.841s 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:09.073 ************************************ 00:31:09.073 END TEST fio_dif_1_default 00:31:09.073 ************************************ 00:31:09.073 13:00:59 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:09.073 13:00:59 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:09.073 13:00:59 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:09.073 13:00:59 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:09.073 13:00:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:09.073 
************************************ 00:31:09.073 START TEST fio_dif_1_multi_subsystems 00:31:09.073 ************************************ 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:09.073 bdev_null0 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:09.073 [2024-07-15 13:00:59.523223] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:09.073 bdev_null1 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:09.073 13:00:59 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:09.073 { 00:31:09.073 "params": { 00:31:09.073 "name": "Nvme$subsystem", 00:31:09.073 "trtype": "$TEST_TRANSPORT", 00:31:09.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:09.073 "adrfam": "ipv4", 00:31:09.073 "trsvcid": "$NVMF_PORT", 00:31:09.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:09.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:09.073 "hdgst": ${hdgst:-false}, 00:31:09.073 "ddgst": ${ddgst:-false} 00:31:09.073 }, 00:31:09.073 "method": "bdev_nvme_attach_controller" 00:31:09.073 } 00:31:09.073 EOF 00:31:09.073 )") 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local 
asan_lib= 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:09.073 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:31:09.074 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:09.074 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:09.074 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:09.074 { 00:31:09.074 "params": { 00:31:09.074 "name": "Nvme$subsystem", 00:31:09.074 "trtype": "$TEST_TRANSPORT", 00:31:09.074 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:09.074 "adrfam": "ipv4", 00:31:09.074 "trsvcid": "$NVMF_PORT", 00:31:09.074 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:09.074 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:09.074 "hdgst": ${hdgst:-false}, 00:31:09.074 "ddgst": ${ddgst:-false} 00:31:09.074 }, 00:31:09.074 "method": "bdev_nvme_attach_controller" 00:31:09.074 } 00:31:09.074 EOF 00:31:09.074 )") 00:31:09.074 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:09.074 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:09.074 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:09.074 
13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 00:31:09.074 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:31:09.074 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:09.074 "params": { 00:31:09.074 "name": "Nvme0", 00:31:09.074 "trtype": "tcp", 00:31:09.074 "traddr": "10.0.0.2", 00:31:09.074 "adrfam": "ipv4", 00:31:09.074 "trsvcid": "4420", 00:31:09.074 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:09.074 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:09.074 "hdgst": false, 00:31:09.074 "ddgst": false 00:31:09.074 }, 00:31:09.074 "method": "bdev_nvme_attach_controller" 00:31:09.074 },{ 00:31:09.074 "params": { 00:31:09.074 "name": "Nvme1", 00:31:09.074 "trtype": "tcp", 00:31:09.074 "traddr": "10.0.0.2", 00:31:09.074 "adrfam": "ipv4", 00:31:09.074 "trsvcid": "4420", 00:31:09.074 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:09.074 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:09.074 "hdgst": false, 00:31:09.074 "ddgst": false 00:31:09.074 }, 00:31:09.074 "method": "bdev_nvme_attach_controller" 00:31:09.074 }' 00:31:09.074 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:09.074 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:09.074 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:09.074 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:09.074 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:09.074 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:09.074 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 
00:31:09.074 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:09.074 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:09.074 13:00:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:09.074 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:09.074 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:09.074 fio-3.35 00:31:09.074 Starting 2 threads 00:31:09.074 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.143 00:31:19.143 filename0: (groupid=0, jobs=1): err= 0: pid=4135990: Mon Jul 15 13:01:10 2024 00:31:19.143 read: IOPS=190, BW=763KiB/s (781kB/s)(7632KiB/10005msec) 00:31:19.143 slat (nsec): min=8210, max=32952, avg=10445.66, stdev=2247.98 00:31:19.143 clat (usec): min=603, max=42319, avg=20943.42, stdev=20313.81 00:31:19.143 lat (usec): min=612, max=42329, avg=20953.86, stdev=20313.23 00:31:19.143 clat percentiles (usec): 00:31:19.143 | 1.00th=[ 635], 5.00th=[ 644], 10.00th=[ 644], 20.00th=[ 660], 00:31:19.143 | 30.00th=[ 668], 40.00th=[ 693], 50.00th=[ 1663], 60.00th=[41157], 00:31:19.143 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:19.143 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:19.143 | 99.99th=[42206] 00:31:19.143 bw ( KiB/s): min= 736, max= 768, per=50.23%, avg=764.63, stdev=10.09, samples=19 00:31:19.143 iops : min= 184, max= 192, avg=191.16, stdev= 2.52, samples=19 00:31:19.144 lat (usec) : 750=48.17%, 1000=1.73% 00:31:19.144 lat (msec) : 2=0.21%, 50=49.90% 00:31:19.144 cpu : usr=97.37%, sys=2.32%, ctx=11, majf=0, minf=180 00:31:19.144 IO depths : 1=25.0%, 
2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:19.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.144 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:19.144 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:19.144 filename1: (groupid=0, jobs=1): err= 0: pid=4135991: Mon Jul 15 13:01:10 2024 00:31:19.144 read: IOPS=189, BW=759KiB/s (777kB/s)(7600KiB/10014msec) 00:31:19.144 slat (nsec): min=7759, max=32626, avg=10442.37, stdev=2221.28 00:31:19.144 clat (usec): min=612, max=42511, avg=21050.68, stdev=20346.62 00:31:19.144 lat (usec): min=621, max=42520, avg=21061.12, stdev=20346.09 00:31:19.144 clat percentiles (usec): 00:31:19.144 | 1.00th=[ 619], 5.00th=[ 627], 10.00th=[ 635], 20.00th=[ 644], 00:31:19.144 | 30.00th=[ 652], 40.00th=[ 709], 50.00th=[41157], 60.00th=[41157], 00:31:19.144 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:19.144 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:31:19.144 | 99.99th=[42730] 00:31:19.144 bw ( KiB/s): min= 704, max= 768, per=49.83%, avg=758.40, stdev=23.45, samples=20 00:31:19.144 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:31:19.144 lat (usec) : 750=44.79%, 1000=5.11% 00:31:19.144 lat (msec) : 50=50.11% 00:31:19.144 cpu : usr=97.10%, sys=2.59%, ctx=14, majf=0, minf=58 00:31:19.144 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:19.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.144 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:19.144 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:19.144 00:31:19.144 Run status group 0 (all jobs): 00:31:19.144 READ: bw=1521KiB/s (1558kB/s), 759KiB/s-763KiB/s 
(777kB/s-781kB/s), io=14.9MiB (15.6MB), run=10005-10014msec 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.144 00:31:19.144 real 0m11.501s 00:31:19.144 user 0m31.617s 00:31:19.144 sys 0m0.838s 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:19.144 13:01:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:19.144 ************************************ 00:31:19.144 END TEST fio_dif_1_multi_subsystems 00:31:19.144 ************************************ 00:31:19.144 13:01:11 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:19.144 13:01:11 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:19.144 13:01:11 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:19.144 13:01:11 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:19.144 13:01:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:19.144 ************************************ 00:31:19.144 START TEST fio_dif_rand_params 00:31:19.144 ************************************ 00:31:19.144 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:31:19.144 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:19.144 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs 
runtime iodepth files 00:31:19.144 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:19.144 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:19.144 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:19.144 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:19.144 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:19.144 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:19.144 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:19.144 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:19.144 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:19.144 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:19.144 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:19.144 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.144 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:19.144 bdev_null0 00:31:19.403 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.403 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:19.403 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:19.404 [2024-07-15 13:01:11.103518] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:19.404 13:01:11 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:19.404 { 00:31:19.404 "params": { 00:31:19.404 "name": "Nvme$subsystem", 00:31:19.404 "trtype": "$TEST_TRANSPORT", 00:31:19.404 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:19.404 "adrfam": "ipv4", 00:31:19.404 "trsvcid": "$NVMF_PORT", 00:31:19.404 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:19.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:19.404 "hdgst": ${hdgst:-false}, 00:31:19.404 "ddgst": ${ddgst:-false} 00:31:19.404 }, 00:31:19.404 "method": "bdev_nvme_attach_controller" 00:31:19.404 } 00:31:19.404 EOF 00:31:19.404 )") 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:19.404 "params": { 00:31:19.404 "name": "Nvme0", 00:31:19.404 "trtype": "tcp", 00:31:19.404 "traddr": "10.0.0.2", 00:31:19.404 "adrfam": "ipv4", 00:31:19.404 "trsvcid": "4420", 00:31:19.404 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:19.404 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:19.404 "hdgst": false, 00:31:19.404 "ddgst": false 00:31:19.404 }, 00:31:19.404 "method": "bdev_nvme_attach_controller" 00:31:19.404 }' 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- 
# LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:19.404 13:01:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:19.663 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:19.663 ... 00:31:19.663 fio-3.35 00:31:19.663 Starting 3 threads 00:31:19.663 EAL: No free 2048 kB hugepages reported on node 1 00:31:26.228 00:31:26.228 filename0: (groupid=0, jobs=1): err= 0: pid=4138047: Mon Jul 15 13:01:17 2024 00:31:26.228 read: IOPS=211, BW=26.5MiB/s (27.8MB/s)(134MiB/5045msec) 00:31:26.228 slat (nsec): min=9402, max=55718, avg=18012.20, stdev=6543.27 00:31:26.228 clat (usec): min=5377, max=57308, avg=14096.10, stdev=7461.63 00:31:26.228 lat (usec): min=5387, max=57342, avg=14114.11, stdev=7461.46 00:31:26.228 clat percentiles (usec): 00:31:26.228 | 1.00th=[ 7767], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[11469], 00:31:26.228 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12911], 60.00th=[13435], 00:31:26.228 | 70.00th=[13829], 80.00th=[14484], 90.00th=[15401], 95.00th=[16319], 00:31:26.228 | 99.00th=[54264], 99.50th=[55837], 99.90th=[57410], 99.95th=[57410], 00:31:26.228 | 99.99th=[57410] 00:31:26.228 bw ( KiB/s): min=24064, max=31488, per=33.51%, avg=27315.20, stdev=2165.69, samples=10 00:31:26.228 iops : min= 188, max= 246, avg=213.40, stdev=16.92, samples=10 00:31:26.228 lat (msec) : 10=8.79%, 20=87.93%, 50=0.19%, 100=3.09% 00:31:26.228 cpu : usr=95.84%, sys=3.35%, ctx=215, majf=0, minf=87 00:31:26.228 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.228 issued rwts: total=1069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.228 latency : target=0, 
window=0, percentile=100.00%, depth=3 00:31:26.228 filename0: (groupid=0, jobs=1): err= 0: pid=4138048: Mon Jul 15 13:01:17 2024 00:31:26.228 read: IOPS=204, BW=25.6MiB/s (26.8MB/s)(129MiB/5043msec) 00:31:26.228 slat (nsec): min=9531, max=77729, avg=18662.04, stdev=7640.94 00:31:26.229 clat (usec): min=5272, max=58143, avg=14596.59, stdev=6060.44 00:31:26.229 lat (usec): min=5285, max=58158, avg=14615.25, stdev=6060.10 00:31:26.229 clat percentiles (usec): 00:31:26.229 | 1.00th=[ 5932], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[11994], 00:31:26.229 | 30.00th=[12780], 40.00th=[13829], 50.00th=[14484], 60.00th=[14877], 00:31:26.229 | 70.00th=[15270], 80.00th=[15926], 90.00th=[16909], 95.00th=[17695], 00:31:26.229 | 99.00th=[53740], 99.50th=[54789], 99.90th=[56886], 99.95th=[57934], 00:31:26.229 | 99.99th=[57934] 00:31:26.229 bw ( KiB/s): min=24064, max=28416, per=32.31%, avg=26339.56, stdev=1450.67, samples=9 00:31:26.229 iops : min= 188, max= 222, avg=205.78, stdev=11.33, samples=9 00:31:26.229 lat (msec) : 10=11.34%, 20=86.72%, 50=0.19%, 100=1.74% 00:31:26.229 cpu : usr=96.73%, sys=2.88%, ctx=16, majf=0, minf=166 00:31:26.229 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.229 issued rwts: total=1032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.229 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:26.229 filename0: (groupid=0, jobs=1): err= 0: pid=4138049: Mon Jul 15 13:01:17 2024 00:31:26.229 read: IOPS=222, BW=27.8MiB/s (29.1MB/s)(139MiB/5003msec) 00:31:26.229 slat (nsec): min=9572, max=42430, avg=21560.19, stdev=7119.85 00:31:26.229 clat (usec): min=6897, max=54629, avg=13470.85, stdev=4197.64 00:31:26.229 lat (usec): min=6907, max=54656, avg=13492.41, stdev=4197.83 00:31:26.229 clat percentiles (usec): 00:31:26.229 | 1.00th=[ 7832], 5.00th=[ 
8979], 10.00th=[ 9372], 20.00th=[11076], 00:31:26.229 | 30.00th=[12125], 40.00th=[12911], 50.00th=[13566], 60.00th=[13960], 00:31:26.229 | 70.00th=[14484], 80.00th=[15008], 90.00th=[16057], 95.00th=[16909], 00:31:26.229 | 99.00th=[18482], 99.50th=[52167], 99.90th=[54264], 99.95th=[54789], 00:31:26.229 | 99.99th=[54789] 00:31:26.229 bw ( KiB/s): min=24832, max=30976, per=34.83%, avg=28393.78, stdev=1671.32, samples=9 00:31:26.229 iops : min= 194, max= 242, avg=221.78, stdev=13.06, samples=9 00:31:26.229 lat (msec) : 10=13.49%, 20=85.70%, 50=0.09%, 100=0.72% 00:31:26.229 cpu : usr=95.06%, sys=3.94%, ctx=127, majf=0, minf=92 00:31:26.229 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.229 issued rwts: total=1112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.229 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:26.229 00:31:26.229 Run status group 0 (all jobs): 00:31:26.229 READ: bw=79.6MiB/s (83.5MB/s), 25.6MiB/s-27.8MiB/s (26.8MB/s-29.1MB/s), io=402MiB (421MB), run=5003-5045msec 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # 
set +x 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.229 bdev_null0 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.229 13:01:17 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.229 [2024-07-15 13:01:17.509885] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.229 bdev_null1 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.229 bdev_null2 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 
2 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:26.229 13:01:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:26.229 { 00:31:26.229 "params": { 00:31:26.229 "name": "Nvme$subsystem", 00:31:26.229 "trtype": "$TEST_TRANSPORT", 00:31:26.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:26.229 "adrfam": "ipv4", 00:31:26.229 "trsvcid": "$NVMF_PORT", 00:31:26.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:26.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:26.230 "hdgst": ${hdgst:-false}, 00:31:26.230 "ddgst": ${ddgst:-false} 00:31:26.230 }, 00:31:26.230 "method": "bdev_nvme_attach_controller" 00:31:26.230 } 00:31:26.230 EOF 00:31:26.230 )") 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:26.230 { 00:31:26.230 "params": { 00:31:26.230 "name": "Nvme$subsystem", 00:31:26.230 "trtype": "$TEST_TRANSPORT", 00:31:26.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:26.230 "adrfam": "ipv4", 00:31:26.230 "trsvcid": "$NVMF_PORT", 00:31:26.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:26.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:26.230 "hdgst": ${hdgst:-false}, 00:31:26.230 "ddgst": ${ddgst:-false} 00:31:26.230 }, 00:31:26.230 "method": "bdev_nvme_attach_controller" 
00:31:26.230 } 00:31:26.230 EOF 00:31:26.230 )") 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:26.230 { 00:31:26.230 "params": { 00:31:26.230 "name": "Nvme$subsystem", 00:31:26.230 "trtype": "$TEST_TRANSPORT", 00:31:26.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:26.230 "adrfam": "ipv4", 00:31:26.230 "trsvcid": "$NVMF_PORT", 00:31:26.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:26.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:26.230 "hdgst": ${hdgst:-false}, 00:31:26.230 "ddgst": ${ddgst:-false} 00:31:26.230 }, 00:31:26.230 "method": "bdev_nvme_attach_controller" 00:31:26.230 } 00:31:26.230 EOF 00:31:26.230 )") 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:26.230 "params": { 00:31:26.230 "name": "Nvme0", 00:31:26.230 "trtype": "tcp", 00:31:26.230 "traddr": "10.0.0.2", 00:31:26.230 "adrfam": "ipv4", 00:31:26.230 "trsvcid": "4420", 00:31:26.230 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:26.230 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:26.230 "hdgst": false, 00:31:26.230 "ddgst": false 00:31:26.230 }, 00:31:26.230 "method": "bdev_nvme_attach_controller" 00:31:26.230 },{ 00:31:26.230 "params": { 00:31:26.230 "name": "Nvme1", 00:31:26.230 "trtype": "tcp", 00:31:26.230 "traddr": "10.0.0.2", 00:31:26.230 "adrfam": "ipv4", 00:31:26.230 "trsvcid": "4420", 00:31:26.230 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:26.230 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:26.230 "hdgst": false, 00:31:26.230 "ddgst": false 00:31:26.230 }, 00:31:26.230 "method": "bdev_nvme_attach_controller" 00:31:26.230 },{ 00:31:26.230 "params": { 00:31:26.230 "name": "Nvme2", 00:31:26.230 "trtype": "tcp", 00:31:26.230 "traddr": "10.0.0.2", 00:31:26.230 "adrfam": "ipv4", 00:31:26.230 "trsvcid": "4420", 00:31:26.230 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:26.230 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:26.230 "hdgst": false, 00:31:26.230 "ddgst": false 00:31:26.230 }, 00:31:26.230 "method": "bdev_nvme_attach_controller" 00:31:26.230 }' 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:26.230 13:01:17 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:26.230 13:01:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:26.230 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:26.230 ... 00:31:26.230 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:26.230 ... 00:31:26.230 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:26.230 ... 
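[Editor's note] The trace above shows `nvmf/common.sh` assembling the JSON handed to fio via `--spdk_json_conf /dev/fd/62`. A minimal sketch of that pattern follows — one JSON object per subsystem built with a heredoc, collected into a bash array, then comma-joined via `IFS`. The subsystem count and the TCP target values here are illustrative assumptions taken from the expanded output in the log, not a reproduction of the full script.

```shell
# Sketch of the config-assembly pattern seen at nvmf/common.sh@554-558 above.
# Assumptions: two subsystems, fixed tcp/10.0.0.2/4420 values as in the log.
config=()
for subsystem in 0 1; do
  # Each heredoc expands $subsystem into one bdev_nvme_attach_controller entry.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# "${config[*]}" joins the array elements with the first character of IFS,
# producing the comma-separated object list that is then printf'd (and piped
# through jq in the full script) before being handed to fio.
IFS=,
printf '%s\n' "${config[*]}"
```

The `IFS=,` join is why the log's `printf '%s\n'` output shows `},{` between the per-controller objects rather than a JSON array.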
00:31:26.230 fio-3.35 00:31:26.230 Starting 24 threads 00:31:26.230 EAL: No free 2048 kB hugepages reported on node 1 00:31:38.444 00:31:38.444 filename0: (groupid=0, jobs=1): err= 0: pid=4139382: Mon Jul 15 13:01:29 2024 00:31:38.444 read: IOPS=427, BW=1712KiB/s (1753kB/s)(16.8MiB/10021msec) 00:31:38.444 slat (nsec): min=6837, max=97150, avg=46288.52, stdev=18200.46 00:31:38.444 clat (usec): min=7027, max=52364, avg=37026.94, stdev=3245.08 00:31:38.444 lat (usec): min=7077, max=52382, avg=37073.23, stdev=3246.19 00:31:38.444 clat percentiles (usec): 00:31:38.444 | 1.00th=[ 8717], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:38.444 | 30.00th=[36963], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:38.444 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:31:38.444 | 99.00th=[38536], 99.50th=[39584], 99.90th=[44827], 99.95th=[44827], 00:31:38.444 | 99.99th=[52167] 00:31:38.444 bw ( KiB/s): min= 1664, max= 1920, per=4.20%, avg=1708.80, stdev=75.15, samples=20 00:31:38.444 iops : min= 416, max= 480, avg=427.20, stdev=18.79, samples=20 00:31:38.444 lat (msec) : 10=1.12%, 50=98.83%, 100=0.05% 00:31:38.444 cpu : usr=99.15%, sys=0.54%, ctx=19, majf=0, minf=58 00:31:38.444 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:38.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.444 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.444 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.444 filename0: (groupid=0, jobs=1): err= 0: pid=4139383: Mon Jul 15 13:01:29 2024 00:31:38.444 read: IOPS=423, BW=1696KiB/s (1737kB/s)(16.6MiB/10001msec) 00:31:38.444 slat (nsec): min=8733, max=98577, avg=51505.11, stdev=14481.81 00:31:38.444 clat (usec): min=15321, max=59026, avg=37298.17, stdev=1328.18 00:31:38.444 lat (usec): min=15335, max=59077, avg=37349.67, 
stdev=1327.56 00:31:38.444 clat percentiles (usec): 00:31:38.444 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:38.444 | 30.00th=[36963], 40.00th=[36963], 50.00th=[37487], 60.00th=[37487], 00:31:38.444 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:31:38.444 | 99.00th=[38536], 99.50th=[39060], 99.90th=[50594], 99.95th=[50594], 00:31:38.444 | 99.99th=[58983] 00:31:38.444 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1690.95, stdev=53.61, samples=19 00:31:38.444 iops : min= 416, max= 448, avg=422.74, stdev=13.40, samples=19 00:31:38.444 lat (msec) : 20=0.05%, 50=99.58%, 100=0.38% 00:31:38.444 cpu : usr=98.49%, sys=0.86%, ctx=47, majf=0, minf=39 00:31:38.444 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:38.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.444 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.444 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.444 filename0: (groupid=0, jobs=1): err= 0: pid=4139384: Mon Jul 15 13:01:29 2024 00:31:38.444 read: IOPS=422, BW=1689KiB/s (1730kB/s)(16.5MiB/10002msec) 00:31:38.444 slat (nsec): min=7783, max=99766, avg=48468.19, stdev=17193.51 00:31:38.444 clat (usec): min=35342, max=73289, avg=37495.95, stdev=2216.86 00:31:38.444 lat (usec): min=35405, max=73357, avg=37544.42, stdev=2214.79 00:31:38.444 clat percentiles (usec): 00:31:38.444 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:38.444 | 30.00th=[36963], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:38.444 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:31:38.444 | 99.00th=[38536], 99.50th=[39060], 99.90th=[72877], 99.95th=[72877], 00:31:38.444 | 99.99th=[72877] 00:31:38.444 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1684.21, stdev=64.19, samples=19 
00:31:38.444 iops : min= 384, max= 448, avg=421.05, stdev=16.05, samples=19 00:31:38.444 lat (msec) : 50=99.62%, 100=0.38% 00:31:38.444 cpu : usr=98.46%, sys=0.84%, ctx=72, majf=0, minf=61 00:31:38.444 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:38.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.444 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.444 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.444 filename0: (groupid=0, jobs=1): err= 0: pid=4139385: Mon Jul 15 13:01:29 2024 00:31:38.444 read: IOPS=422, BW=1689KiB/s (1730kB/s)(16.5MiB/10001msec) 00:31:38.444 slat (usec): min=9, max=116, avg=51.95, stdev=15.28 00:31:38.444 clat (usec): min=27659, max=83923, avg=37411.40, stdev=2329.10 00:31:38.444 lat (usec): min=27670, max=83964, avg=37463.35, stdev=2328.34 00:31:38.444 clat percentiles (usec): 00:31:38.444 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:38.444 | 30.00th=[36963], 40.00th=[36963], 50.00th=[37487], 60.00th=[37487], 00:31:38.444 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:31:38.444 | 99.00th=[38536], 99.50th=[38536], 99.90th=[72877], 99.95th=[72877], 00:31:38.444 | 99.99th=[84411] 00:31:38.444 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1684.21, stdev=64.19, samples=19 00:31:38.444 iops : min= 384, max= 448, avg=421.05, stdev=16.05, samples=19 00:31:38.444 lat (msec) : 50=99.62%, 100=0.38% 00:31:38.444 cpu : usr=98.75%, sys=0.87%, ctx=35, majf=0, minf=48 00:31:38.444 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:38.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.444 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.444 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:31:38.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.444 filename0: (groupid=0, jobs=1): err= 0: pid=4139387: Mon Jul 15 13:01:29 2024 00:31:38.444 read: IOPS=435, BW=1742KiB/s (1784kB/s)(17.1MiB/10031msec) 00:31:38.444 slat (usec): min=5, max=150, avg=50.32, stdev=21.40 00:31:38.444 clat (usec): min=1741, max=43722, avg=36342.54, stdev=5656.39 00:31:38.444 lat (usec): min=1754, max=43742, avg=36392.86, stdev=5661.84 00:31:38.444 clat percentiles (usec): 00:31:38.444 | 1.00th=[ 1975], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 00:31:38.444 | 30.00th=[36963], 40.00th=[36963], 50.00th=[37487], 60.00th=[37487], 00:31:38.444 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38011], 00:31:38.444 | 99.00th=[39060], 99.50th=[40109], 99.90th=[42730], 99.95th=[43254], 00:31:38.444 | 99.99th=[43779] 00:31:38.444 bw ( KiB/s): min= 1664, max= 2560, per=4.28%, avg=1740.80, stdev=200.89, samples=20 00:31:38.444 iops : min= 416, max= 640, avg=435.20, stdev=50.22, samples=20 00:31:38.444 lat (msec) : 2=1.05%, 4=0.78%, 10=1.10%, 50=97.07% 00:31:38.444 cpu : usr=98.67%, sys=0.88%, ctx=15, majf=0, minf=72 00:31:38.444 IO depths : 1=5.9%, 2=11.9%, 4=24.3%, 8=51.3%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:38.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.444 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.444 issued rwts: total=4368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.444 filename0: (groupid=0, jobs=1): err= 0: pid=4139388: Mon Jul 15 13:01:29 2024 00:31:38.444 read: IOPS=428, BW=1713KiB/s (1754kB/s)(16.8MiB/10028msec) 00:31:38.444 slat (usec): min=9, max=126, avg=38.40, stdev=21.84 00:31:38.444 clat (usec): min=6762, max=42086, avg=37077.01, stdev=3261.53 00:31:38.444 lat (usec): min=6772, max=42110, avg=37115.41, stdev=3261.63 00:31:38.444 clat percentiles (usec): 00:31:38.444 | 
1.00th=[15270], 5.00th=[35390], 10.00th=[36963], 20.00th=[36963], 00:31:38.444 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:38.444 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38536], 00:31:38.444 | 99.00th=[40633], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 00:31:38.444 | 99.99th=[42206] 00:31:38.444 bw ( KiB/s): min= 1664, max= 1920, per=4.21%, avg=1711.20, stdev=74.41, samples=20 00:31:38.444 iops : min= 416, max= 480, avg=427.80, stdev=18.60, samples=20 00:31:38.444 lat (msec) : 10=0.88%, 20=0.37%, 50=98.74% 00:31:38.444 cpu : usr=98.64%, sys=0.93%, ctx=50, majf=0, minf=50 00:31:38.444 IO depths : 1=4.8%, 2=10.7%, 4=23.5%, 8=53.3%, 16=7.7%, 32=0.0%, >=64=0.0% 00:31:38.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.444 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.445 issued rwts: total=4294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.445 filename0: (groupid=0, jobs=1): err= 0: pid=4139389: Mon Jul 15 13:01:29 2024 00:31:38.445 read: IOPS=422, BW=1690KiB/s (1731kB/s)(16.5MiB/10010msec) 00:31:38.445 slat (usec): min=9, max=151, avg=49.21, stdev=20.28 00:31:38.445 clat (usec): min=15294, max=96137, avg=37441.06, stdev=4172.02 00:31:38.445 lat (usec): min=15332, max=96162, avg=37490.27, stdev=4171.18 00:31:38.445 clat percentiles (usec): 00:31:38.445 | 1.00th=[32900], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:38.445 | 30.00th=[36963], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:38.445 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:31:38.445 | 99.00th=[41157], 99.50th=[57934], 99.90th=[95945], 99.95th=[95945], 00:31:38.445 | 99.99th=[95945] 00:31:38.445 bw ( KiB/s): min= 1456, max= 1792, per=4.15%, avg=1686.74, stdev=80.06, samples=19 00:31:38.445 iops : min= 364, max= 448, avg=421.68, stdev=20.01, samples=19 
00:31:38.445 lat (msec) : 20=0.38%, 50=99.01%, 100=0.61% 00:31:38.445 cpu : usr=98.96%, sys=0.60%, ctx=13, majf=0, minf=51 00:31:38.445 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:38.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.445 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.445 issued rwts: total=4230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.445 filename0: (groupid=0, jobs=1): err= 0: pid=4139390: Mon Jul 15 13:01:29 2024 00:31:38.445 read: IOPS=423, BW=1695KiB/s (1736kB/s)(16.6MiB/10004msec) 00:31:38.445 slat (usec): min=7, max=139, avg=43.77, stdev=21.01 00:31:38.445 clat (usec): min=22858, max=50866, avg=37398.74, stdev=1237.79 00:31:38.445 lat (usec): min=22890, max=50881, avg=37442.51, stdev=1234.56 00:31:38.445 clat percentiles (usec): 00:31:38.445 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:38.445 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:38.445 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:31:38.445 | 99.00th=[38536], 99.50th=[39060], 99.90th=[50594], 99.95th=[50594], 00:31:38.445 | 99.99th=[51119] 00:31:38.445 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1690.95, stdev=53.61, samples=19 00:31:38.445 iops : min= 416, max= 448, avg=422.74, stdev=13.40, samples=19 00:31:38.445 lat (msec) : 50=99.62%, 100=0.38% 00:31:38.445 cpu : usr=98.69%, sys=0.90%, ctx=13, majf=0, minf=50 00:31:38.445 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:38.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.445 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.445 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.445 latency : target=0, window=0, percentile=100.00%, depth=16 
00:31:38.445 filename1: (groupid=0, jobs=1): err= 0: pid=4139391: Mon Jul 15 13:01:29 2024 00:31:38.445 read: IOPS=423, BW=1695KiB/s (1736kB/s)(16.6MiB/10005msec) 00:31:38.445 slat (usec): min=13, max=174, avg=59.82, stdev=27.07 00:31:38.445 clat (usec): min=15308, max=59240, avg=37154.76, stdev=1350.46 00:31:38.445 lat (usec): min=15329, max=59269, avg=37214.59, stdev=1352.51 00:31:38.445 clat percentiles (usec): 00:31:38.445 | 1.00th=[36439], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 00:31:38.445 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[37487], 00:31:38.445 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[37487], 00:31:38.445 | 99.00th=[38536], 99.50th=[39060], 99.90th=[50594], 99.95th=[51119], 00:31:38.445 | 99.99th=[58983] 00:31:38.445 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1690.95, stdev=53.61, samples=19 00:31:38.445 iops : min= 416, max= 448, avg=422.74, stdev=13.40, samples=19 00:31:38.445 lat (msec) : 20=0.05%, 50=99.58%, 100=0.38% 00:31:38.445 cpu : usr=98.31%, sys=1.18%, ctx=13, majf=0, minf=34 00:31:38.445 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:38.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.445 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.445 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.445 filename1: (groupid=0, jobs=1): err= 0: pid=4139392: Mon Jul 15 13:01:29 2024 00:31:38.445 read: IOPS=422, BW=1690KiB/s (1731kB/s)(16.5MiB/10010msec) 00:31:38.445 slat (usec): min=6, max=142, avg=48.23, stdev=20.32 00:31:38.445 clat (msec): min=11, max=120, avg=37.40, stdev= 4.62 00:31:38.445 lat (msec): min=11, max=120, avg=37.45, stdev= 4.62 00:31:38.445 clat percentiles (msec): 00:31:38.445 | 1.00th=[ 36], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:31:38.445 | 30.00th=[ 37], 40.00th=[ 37], 
50.00th=[ 38], 60.00th=[ 38], 00:31:38.445 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:31:38.445 | 99.00th=[ 39], 99.50th=[ 65], 99.90th=[ 99], 99.95th=[ 99], 00:31:38.445 | 99.99th=[ 122] 00:31:38.445 bw ( KiB/s): min= 1328, max= 1792, per=4.13%, avg=1680.00, stdev=102.73, samples=19 00:31:38.445 iops : min= 332, max= 448, avg=420.00, stdev=25.68, samples=19 00:31:38.445 lat (msec) : 20=0.76%, 50=98.68%, 100=0.52%, 250=0.05% 00:31:38.445 cpu : usr=98.90%, sys=0.67%, ctx=15, majf=0, minf=55 00:31:38.445 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:38.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.445 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.445 issued rwts: total=4230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.445 filename1: (groupid=0, jobs=1): err= 0: pid=4139393: Mon Jul 15 13:01:29 2024 00:31:38.445 read: IOPS=422, BW=1689KiB/s (1729kB/s)(16.5MiB/10004msec) 00:31:38.445 slat (usec): min=6, max=143, avg=42.78, stdev=18.53 00:31:38.445 clat (usec): min=15348, max=90732, avg=37550.28, stdev=3563.49 00:31:38.445 lat (usec): min=15372, max=90750, avg=37593.06, stdev=3561.28 00:31:38.445 clat percentiles (usec): 00:31:38.445 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:38.445 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:38.445 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:31:38.445 | 99.00th=[39060], 99.50th=[40633], 99.90th=[90702], 99.95th=[90702], 00:31:38.445 | 99.99th=[90702] 00:31:38.445 bw ( KiB/s): min= 1408, max= 1792, per=4.14%, avg=1684.21, stdev=88.10, samples=19 00:31:38.445 iops : min= 352, max= 448, avg=421.05, stdev=22.02, samples=19 00:31:38.445 lat (msec) : 20=0.38%, 50=99.24%, 100=0.38% 00:31:38.445 cpu : usr=98.73%, sys=0.85%, ctx=13, majf=0, minf=40 
00:31:38.445 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:38.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.445 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.445 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.445 filename1: (groupid=0, jobs=1): err= 0: pid=4139394: Mon Jul 15 13:01:29 2024 00:31:38.445 read: IOPS=424, BW=1697KiB/s (1738kB/s)(16.6MiB/10008msec) 00:31:38.445 slat (usec): min=6, max=106, avg=44.67, stdev=20.05 00:31:38.445 clat (usec): min=10160, max=91307, avg=37303.65, stdev=3857.52 00:31:38.445 lat (usec): min=10169, max=91325, avg=37348.32, stdev=3857.57 00:31:38.445 clat percentiles (usec): 00:31:38.445 | 1.00th=[23462], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:38.445 | 30.00th=[36963], 40.00th=[36963], 50.00th=[37487], 60.00th=[37487], 00:31:38.445 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:31:38.445 | 99.00th=[39060], 99.50th=[40109], 99.90th=[86508], 99.95th=[86508], 00:31:38.445 | 99.99th=[91751] 00:31:38.445 bw ( KiB/s): min= 1410, max= 1792, per=4.15%, avg=1686.84, stdev=87.82, samples=19 00:31:38.445 iops : min= 352, max= 448, avg=421.68, stdev=22.04, samples=19 00:31:38.445 lat (msec) : 20=0.89%, 50=98.73%, 100=0.38% 00:31:38.445 cpu : usr=98.76%, sys=0.80%, ctx=15, majf=0, minf=60 00:31:38.445 IO depths : 1=5.7%, 2=11.9%, 4=24.8%, 8=50.8%, 16=6.8%, 32=0.0%, >=64=0.0% 00:31:38.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.445 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.445 issued rwts: total=4246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.445 filename1: (groupid=0, jobs=1): err= 0: pid=4139395: Mon Jul 15 13:01:29 2024 00:31:38.445 read: 
IOPS=427, BW=1709KiB/s (1750kB/s)(16.7MiB/10011msec) 00:31:38.445 slat (usec): min=6, max=146, avg=47.10, stdev=20.64 00:31:38.445 clat (usec): min=11317, max=96507, avg=37036.41, stdev=4213.17 00:31:38.445 lat (usec): min=11352, max=96524, avg=37083.51, stdev=4214.47 00:31:38.445 clat percentiles (usec): 00:31:38.445 | 1.00th=[21365], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 00:31:38.445 | 30.00th=[36963], 40.00th=[36963], 50.00th=[37487], 60.00th=[37487], 00:31:38.445 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:31:38.445 | 99.00th=[42206], 99.50th=[45876], 99.90th=[85459], 99.95th=[85459], 00:31:38.445 | 99.99th=[96994] 00:31:38.445 bw ( KiB/s): min= 1552, max= 1808, per=4.18%, avg=1699.37, stdev=71.27, samples=19 00:31:38.445 iops : min= 388, max= 452, avg=424.84, stdev=17.82, samples=19 00:31:38.445 lat (msec) : 20=0.75%, 50=98.88%, 100=0.37% 00:31:38.445 cpu : usr=98.78%, sys=0.80%, ctx=19, majf=0, minf=41 00:31:38.445 IO depths : 1=5.7%, 2=11.5%, 4=23.5%, 8=52.3%, 16=6.9%, 32=0.0%, >=64=0.0% 00:31:38.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.445 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.445 issued rwts: total=4276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.445 filename1: (groupid=0, jobs=1): err= 0: pid=4139396: Mon Jul 15 13:01:29 2024 00:31:38.445 read: IOPS=423, BW=1695KiB/s (1736kB/s)(16.6MiB/10004msec) 00:31:38.445 slat (usec): min=9, max=145, avg=35.65, stdev=23.43 00:31:38.445 clat (usec): min=22774, max=50571, avg=37471.66, stdev=1223.18 00:31:38.445 lat (usec): min=22818, max=50597, avg=37507.31, stdev=1219.01 00:31:38.445 clat percentiles (usec): 00:31:38.446 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:31:38.446 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:38.446 | 70.00th=[37487], 80.00th=[37487], 
90.00th=[38011], 95.00th=[38011], 00:31:38.446 | 99.00th=[38536], 99.50th=[39060], 99.90th=[50594], 99.95th=[50594], 00:31:38.446 | 99.99th=[50594] 00:31:38.446 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1690.95, stdev=53.61, samples=19 00:31:38.446 iops : min= 416, max= 448, avg=422.74, stdev=13.40, samples=19 00:31:38.446 lat (msec) : 50=99.62%, 100=0.38% 00:31:38.446 cpu : usr=98.35%, sys=1.23%, ctx=24, majf=0, minf=43 00:31:38.446 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:38.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.446 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.446 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.446 filename1: (groupid=0, jobs=1): err= 0: pid=4139397: Mon Jul 15 13:01:29 2024 00:31:38.446 read: IOPS=424, BW=1697KiB/s (1737kB/s)(16.6MiB/10006msec) 00:31:38.446 slat (usec): min=9, max=176, avg=51.78, stdev=26.70 00:31:38.446 clat (msec): min=11, max=121, avg=37.21, stdev= 5.04 00:31:38.446 lat (msec): min=11, max=122, avg=37.26, stdev= 5.04 00:31:38.446 clat percentiles (msec): 00:31:38.446 | 1.00th=[ 21], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:31:38.446 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:31:38.446 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:31:38.446 | 99.00th=[ 51], 99.50th=[ 63], 99.90th=[ 96], 99.95th=[ 96], 00:31:38.446 | 99.99th=[ 123] 00:31:38.446 bw ( KiB/s): min= 1328, max= 1856, per=4.14%, avg=1685.89, stdev=107.62, samples=19 00:31:38.446 iops : min= 332, max= 464, avg=421.47, stdev=26.91, samples=19 00:31:38.446 lat (msec) : 20=0.75%, 50=98.07%, 100=1.13%, 250=0.05% 00:31:38.446 cpu : usr=98.24%, sys=1.26%, ctx=14, majf=0, minf=38 00:31:38.446 IO depths : 1=5.7%, 2=11.5%, 4=23.5%, 8=52.4%, 16=7.0%, 32=0.0%, >=64=0.0% 00:31:38.446 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.446 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.446 issued rwts: total=4244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.446 filename1: (groupid=0, jobs=1): err= 0: pid=4139399: Mon Jul 15 13:01:29 2024 00:31:38.446 read: IOPS=423, BW=1695KiB/s (1736kB/s)(16.6MiB/10005msec) 00:31:38.446 slat (usec): min=8, max=142, avg=48.08, stdev=20.55 00:31:38.446 clat (usec): min=22919, max=50926, avg=37358.00, stdev=1240.70 00:31:38.446 lat (usec): min=22947, max=50943, avg=37406.08, stdev=1238.15 00:31:38.446 clat percentiles (usec): 00:31:38.446 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:38.446 | 30.00th=[36963], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:38.446 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:31:38.446 | 99.00th=[38536], 99.50th=[39060], 99.90th=[51119], 99.95th=[51119], 00:31:38.446 | 99.99th=[51119] 00:31:38.446 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1690.95, stdev=53.61, samples=19 00:31:38.446 iops : min= 416, max= 448, avg=422.74, stdev=13.40, samples=19 00:31:38.446 lat (msec) : 50=99.62%, 100=0.38% 00:31:38.446 cpu : usr=98.80%, sys=0.78%, ctx=18, majf=0, minf=49 00:31:38.446 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:38.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.446 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.446 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.446 filename2: (groupid=0, jobs=1): err= 0: pid=4139400: Mon Jul 15 13:01:29 2024 00:31:38.446 read: IOPS=423, BW=1694KiB/s (1735kB/s)(16.6MiB/10010msec) 00:31:38.446 slat (usec): min=6, max=148, avg=49.97, stdev=19.79 00:31:38.446 clat (usec): 
min=11521, max=87936, avg=37298.23, stdev=3730.28 00:31:38.446 lat (usec): min=11535, max=87954, avg=37348.20, stdev=3729.86 00:31:38.446 clat percentiles (usec): 00:31:38.446 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:38.446 | 30.00th=[36963], 40.00th=[36963], 50.00th=[37487], 60.00th=[37487], 00:31:38.446 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[37487], 00:31:38.446 | 99.00th=[38536], 99.50th=[39060], 99.90th=[87557], 99.95th=[87557], 00:31:38.446 | 99.99th=[87557] 00:31:38.446 bw ( KiB/s): min= 1408, max= 1792, per=4.14%, avg=1684.21, stdev=88.10, samples=19 00:31:38.446 iops : min= 352, max= 448, avg=421.05, stdev=22.02, samples=19 00:31:38.446 lat (msec) : 20=0.75%, 50=98.87%, 100=0.38% 00:31:38.446 cpu : usr=98.79%, sys=0.79%, ctx=18, majf=0, minf=37 00:31:38.446 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:38.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.446 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.446 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.446 filename2: (groupid=0, jobs=1): err= 0: pid=4139401: Mon Jul 15 13:01:29 2024 00:31:38.446 read: IOPS=422, BW=1689KiB/s (1729kB/s)(16.5MiB/10006msec) 00:31:38.446 slat (usec): min=9, max=134, avg=46.57, stdev=18.60 00:31:38.446 clat (usec): min=15320, max=96919, avg=37504.67, stdev=3700.63 00:31:38.446 lat (usec): min=15366, max=96944, avg=37551.24, stdev=3698.64 00:31:38.446 clat percentiles (usec): 00:31:38.446 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:38.446 | 30.00th=[36963], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:38.446 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:31:38.446 | 99.00th=[39060], 99.50th=[40633], 99.90th=[92799], 99.95th=[92799], 00:31:38.446 | 
99.99th=[96994] 00:31:38.446 bw ( KiB/s): min= 1408, max= 1792, per=4.14%, avg=1684.21, stdev=88.10, samples=19 00:31:38.446 iops : min= 352, max= 448, avg=421.05, stdev=22.02, samples=19 00:31:38.446 lat (msec) : 20=0.38%, 50=99.24%, 100=0.38% 00:31:38.446 cpu : usr=98.76%, sys=0.82%, ctx=15, majf=0, minf=44 00:31:38.446 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:38.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.446 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.446 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.446 filename2: (groupid=0, jobs=1): err= 0: pid=4139402: Mon Jul 15 13:01:29 2024 00:31:38.446 read: IOPS=423, BW=1695KiB/s (1736kB/s)(16.6MiB/10004msec) 00:31:38.446 slat (usec): min=9, max=146, avg=38.50, stdev=22.70 00:31:38.446 clat (usec): min=22812, max=50656, avg=37446.03, stdev=1269.75 00:31:38.446 lat (usec): min=22835, max=50690, avg=37484.53, stdev=1266.41 00:31:38.446 clat percentiles (usec): 00:31:38.446 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:31:38.446 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:38.446 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38011], 00:31:38.446 | 99.00th=[38536], 99.50th=[39060], 99.90th=[50594], 99.95th=[50594], 00:31:38.446 | 99.99th=[50594] 00:31:38.446 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1690.95, stdev=53.61, samples=19 00:31:38.446 iops : min= 416, max= 448, avg=422.74, stdev=13.40, samples=19 00:31:38.446 lat (msec) : 50=99.62%, 100=0.38% 00:31:38.446 cpu : usr=98.49%, sys=1.09%, ctx=20, majf=0, minf=42 00:31:38.446 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:38.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.446 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.446 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.446 filename2: (groupid=0, jobs=1): err= 0: pid=4139403: Mon Jul 15 13:01:29 2024 00:31:38.446 read: IOPS=423, BW=1695KiB/s (1736kB/s)(16.6MiB/10004msec) 00:31:38.446 slat (usec): min=10, max=140, avg=42.45, stdev=22.13 00:31:38.446 clat (usec): min=22849, max=50738, avg=37412.29, stdev=1231.84 00:31:38.446 lat (usec): min=22882, max=50755, avg=37454.74, stdev=1228.42 00:31:38.446 clat percentiles (usec): 00:31:38.446 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:38.446 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:38.446 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:31:38.446 | 99.00th=[38536], 99.50th=[39060], 99.90th=[50594], 99.95th=[50594], 00:31:38.446 | 99.99th=[50594] 00:31:38.446 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1690.95, stdev=53.61, samples=19 00:31:38.446 iops : min= 416, max= 448, avg=422.74, stdev=13.40, samples=19 00:31:38.446 lat (msec) : 50=99.62%, 100=0.38% 00:31:38.446 cpu : usr=98.92%, sys=0.65%, ctx=16, majf=0, minf=37 00:31:38.446 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:38.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.446 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.446 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.446 filename2: (groupid=0, jobs=1): err= 0: pid=4139404: Mon Jul 15 13:01:29 2024 00:31:38.446 read: IOPS=425, BW=1701KiB/s (1742kB/s)(16.6MiB/10007msec) 00:31:38.446 slat (usec): min=9, max=127, avg=18.22, stdev=12.11 00:31:38.446 clat (usec): min=11676, max=80938, avg=37443.78, stdev=3663.98 00:31:38.446 lat (usec): min=11687, 
max=80963, avg=37462.00, stdev=3663.80 00:31:38.446 clat percentiles (usec): 00:31:38.446 | 1.00th=[16188], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:38.446 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:38.446 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:31:38.446 | 99.00th=[38536], 99.50th=[39060], 99.90th=[81265], 99.95th=[81265], 00:31:38.446 | 99.99th=[81265] 00:31:38.446 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1690.95, stdev=68.52, samples=19 00:31:38.446 iops : min= 384, max= 448, avg=422.74, stdev=17.13, samples=19 00:31:38.446 lat (msec) : 20=1.13%, 50=98.50%, 100=0.38% 00:31:38.446 cpu : usr=98.64%, sys=0.94%, ctx=16, majf=0, minf=66 00:31:38.446 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:38.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.446 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.447 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.447 filename2: (groupid=0, jobs=1): err= 0: pid=4139405: Mon Jul 15 13:01:29 2024 00:31:38.447 read: IOPS=422, BW=1689KiB/s (1730kB/s)(16.5MiB/10002msec) 00:31:38.447 slat (usec): min=6, max=147, avg=49.33, stdev=20.43 00:31:38.447 clat (usec): min=15053, max=88244, avg=37414.27, stdev=3436.93 00:31:38.447 lat (usec): min=15086, max=88262, avg=37463.60, stdev=3435.11 00:31:38.447 clat percentiles (usec): 00:31:38.447 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:38.447 | 30.00th=[36963], 40.00th=[36963], 50.00th=[37487], 60.00th=[37487], 00:31:38.447 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:31:38.447 | 99.00th=[39060], 99.50th=[40633], 99.90th=[88605], 99.95th=[88605], 00:31:38.447 | 99.99th=[88605] 00:31:38.447 bw ( KiB/s): min= 1408, max= 1792, per=4.14%, avg=1684.21, stdev=85.52, 
samples=19 00:31:38.447 iops : min= 352, max= 448, avg=421.05, stdev=21.38, samples=19 00:31:38.447 lat (msec) : 20=0.38%, 50=99.24%, 100=0.38% 00:31:38.447 cpu : usr=98.11%, sys=1.22%, ctx=89, majf=0, minf=40 00:31:38.447 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:38.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.447 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.447 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.447 filename2: (groupid=0, jobs=1): err= 0: pid=4139406: Mon Jul 15 13:01:29 2024 00:31:38.447 read: IOPS=427, BW=1711KiB/s (1752kB/s)(16.8MiB/10023msec) 00:31:38.447 slat (usec): min=6, max=139, avg=29.33, stdev=23.03 00:31:38.447 clat (usec): min=6689, max=42136, avg=37171.68, stdev=3213.51 00:31:38.447 lat (usec): min=6698, max=42151, avg=37201.01, stdev=3212.44 00:31:38.447 clat percentiles (usec): 00:31:38.447 | 1.00th=[ 8848], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:31:38.447 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:38.447 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38011], 00:31:38.447 | 99.00th=[38536], 99.50th=[39060], 99.90th=[42206], 99.95th=[42206], 00:31:38.447 | 99.99th=[42206] 00:31:38.447 bw ( KiB/s): min= 1664, max= 1920, per=4.20%, avg=1708.80, stdev=75.15, samples=20 00:31:38.447 iops : min= 416, max= 480, avg=427.20, stdev=18.79, samples=20 00:31:38.447 lat (msec) : 10=1.12%, 50=98.88% 00:31:38.447 cpu : usr=98.21%, sys=1.38%, ctx=26, majf=0, minf=63 00:31:38.447 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:38.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.447 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.447 issued rwts: total=4288,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:31:38.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.447 filename2: (groupid=0, jobs=1): err= 0: pid=4139407: Mon Jul 15 13:01:29 2024 00:31:38.447 read: IOPS=423, BW=1695KiB/s (1736kB/s)(16.6MiB/10004msec) 00:31:38.447 slat (usec): min=10, max=139, avg=43.98, stdev=21.17 00:31:38.447 clat (usec): min=22837, max=50842, avg=37396.98, stdev=1236.31 00:31:38.447 lat (usec): min=22857, max=50860, avg=37440.96, stdev=1233.43 00:31:38.447 clat percentiles (usec): 00:31:38.447 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:38.447 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:38.447 | 70.00th=[37487], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:31:38.447 | 99.00th=[38536], 99.50th=[39060], 99.90th=[50594], 99.95th=[50594], 00:31:38.447 | 99.99th=[50594] 00:31:38.447 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1690.95, stdev=53.61, samples=19 00:31:38.447 iops : min= 416, max= 448, avg=422.74, stdev=13.40, samples=19 00:31:38.447 lat (msec) : 50=99.62%, 100=0.38% 00:31:38.447 cpu : usr=98.88%, sys=0.69%, ctx=23, majf=0, minf=43 00:31:38.447 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:38.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.447 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.447 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.447 00:31:38.447 Run status group 0 (all jobs): 00:31:38.447 READ: bw=39.7MiB/s (41.7MB/s), 1689KiB/s-1742KiB/s (1729kB/s-1784kB/s), io=398MiB (418MB), run=10001-10031msec 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- 
# for sub in "$@" 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.447 
13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@28 -- # local sub 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.447 bdev_null0 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.447 13:01:29 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.447 [2024-07-15 13:01:29.327460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.447 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.447 bdev_null1 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.448 
13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:38.448 { 00:31:38.448 "params": { 00:31:38.448 "name": "Nvme$subsystem", 00:31:38.448 "trtype": "$TEST_TRANSPORT", 00:31:38.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:38.448 "adrfam": "ipv4", 00:31:38.448 "trsvcid": "$NVMF_PORT", 00:31:38.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:38.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:38.448 "hdgst": 
${hdgst:-false}, 00:31:38.448 "ddgst": ${ddgst:-false} 00:31:38.448 }, 00:31:38.448 "method": "bdev_nvme_attach_controller" 00:31:38.448 } 00:31:38.448 EOF 00:31:38.448 )") 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # 
for subsystem in "${@:-1}" 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:38.448 { 00:31:38.448 "params": { 00:31:38.448 "name": "Nvme$subsystem", 00:31:38.448 "trtype": "$TEST_TRANSPORT", 00:31:38.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:38.448 "adrfam": "ipv4", 00:31:38.448 "trsvcid": "$NVMF_PORT", 00:31:38.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:38.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:38.448 "hdgst": ${hdgst:-false}, 00:31:38.448 "ddgst": ${ddgst:-false} 00:31:38.448 }, 00:31:38.448 "method": "bdev_nvme_attach_controller" 00:31:38.448 } 00:31:38.448 EOF 00:31:38.448 )") 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:38.448 "params": { 00:31:38.448 "name": "Nvme0", 00:31:38.448 "trtype": "tcp", 00:31:38.448 "traddr": "10.0.0.2", 00:31:38.448 "adrfam": "ipv4", 00:31:38.448 "trsvcid": "4420", 00:31:38.448 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:38.448 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:38.448 "hdgst": false, 00:31:38.448 "ddgst": false 00:31:38.448 }, 00:31:38.448 "method": "bdev_nvme_attach_controller" 00:31:38.448 },{ 00:31:38.448 "params": { 00:31:38.448 "name": "Nvme1", 00:31:38.448 "trtype": "tcp", 00:31:38.448 "traddr": "10.0.0.2", 00:31:38.448 "adrfam": "ipv4", 00:31:38.448 "trsvcid": "4420", 00:31:38.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:38.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:38.448 "hdgst": false, 00:31:38.448 "ddgst": false 00:31:38.448 }, 00:31:38.448 "method": "bdev_nvme_attach_controller" 00:31:38.448 }' 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:38.448 13:01:29 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:38.448 13:01:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:38.448 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:38.448 ... 00:31:38.448 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:38.448 ... 00:31:38.448 fio-3.35 00:31:38.448 Starting 4 threads 00:31:38.448 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.016 00:31:45.016 filename0: (groupid=0, jobs=1): err= 0: pid=4141485: Mon Jul 15 13:01:35 2024 00:31:45.016 read: IOPS=1685, BW=13.2MiB/s (13.8MB/s)(65.9MiB/5002msec) 00:31:45.016 slat (nsec): min=7679, max=83569, avg=21397.59, stdev=14692.36 00:31:45.016 clat (usec): min=858, max=8374, avg=4676.13, stdev=696.89 00:31:45.016 lat (usec): min=873, max=8389, avg=4697.53, stdev=694.97 00:31:45.016 clat percentiles (usec): 00:31:45.016 | 1.00th=[ 3195], 5.00th=[ 3884], 10.00th=[ 4113], 20.00th=[ 4359], 00:31:45.016 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4621], 00:31:45.016 | 70.00th=[ 4752], 80.00th=[ 4883], 90.00th=[ 5407], 95.00th=[ 6194], 00:31:45.016 | 99.00th=[ 7373], 99.50th=[ 7570], 99.90th=[ 7898], 99.95th=[ 8160], 00:31:45.016 | 99.99th=[ 8356] 00:31:45.016 bw ( KiB/s): min=13248, max=13872, per=24.01%, avg=13445.33, stdev=181.90, samples=9 00:31:45.016 iops : min= 1656, max= 1734, avg=1680.67, stdev=22.74, samples=9 00:31:45.016 lat (usec) : 1000=0.02% 00:31:45.016 lat (msec) : 2=0.18%, 4=6.61%, 10=93.19% 00:31:45.016 cpu : usr=97.32%, sys=2.24%, ctx=10, majf=0, minf=60 00:31:45.017 IO depths : 1=0.3%, 2=7.3%, 4=65.7%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.017 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.017 issued rwts: total=8432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.017 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:45.017 filename0: (groupid=0, jobs=1): err= 0: pid=4141486: Mon Jul 15 13:01:35 2024 00:31:45.017 read: IOPS=1809, BW=14.1MiB/s (14.8MB/s)(70.8MiB/5005msec) 00:31:45.017 slat (nsec): min=9183, max=81868, avg=20678.51, stdev=14205.90 00:31:45.017 clat (usec): min=756, max=8769, avg=4355.27, stdev=686.68 00:31:45.017 lat (usec): min=767, max=8792, avg=4375.94, stdev=686.45 00:31:45.017 clat percentiles (usec): 00:31:45.017 | 1.00th=[ 2704], 5.00th=[ 3359], 10.00th=[ 3589], 20.00th=[ 3851], 00:31:45.017 | 30.00th=[ 4080], 40.00th=[ 4293], 50.00th=[ 4424], 60.00th=[ 4490], 00:31:45.017 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4948], 95.00th=[ 5407], 00:31:45.017 | 99.00th=[ 6783], 99.50th=[ 7111], 99.90th=[ 7635], 99.95th=[ 7963], 00:31:45.017 | 99.99th=[ 8717] 00:31:45.017 bw ( KiB/s): min=13488, max=15360, per=25.85%, avg=14478.40, stdev=621.30, samples=10 00:31:45.017 iops : min= 1686, max= 1920, avg=1809.80, stdev=77.66, samples=10 00:31:45.017 lat (usec) : 1000=0.01% 00:31:45.017 lat (msec) : 2=0.34%, 4=25.74%, 10=73.91% 00:31:45.017 cpu : usr=97.54%, sys=2.02%, ctx=11, majf=0, minf=119 00:31:45.017 IO depths : 1=0.5%, 2=9.3%, 4=62.5%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.017 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.017 issued rwts: total=9057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.017 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:45.017 filename1: (groupid=0, jobs=1): err= 0: pid=4141487: Mon Jul 15 13:01:35 2024 00:31:45.017 read: IOPS=1805, BW=14.1MiB/s (14.8MB/s)(70.5MiB/5001msec) 00:31:45.017 slat (usec): min=9, max=122, 
avg=20.10, stdev=11.24 00:31:45.017 clat (usec): min=1351, max=8333, avg=4370.27, stdev=626.11 00:31:45.017 lat (usec): min=1383, max=8345, avg=4390.36, stdev=626.86 00:31:45.017 clat percentiles (usec): 00:31:45.017 | 1.00th=[ 2802], 5.00th=[ 3326], 10.00th=[ 3589], 20.00th=[ 3884], 00:31:45.017 | 30.00th=[ 4146], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4555], 00:31:45.017 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 5014], 95.00th=[ 5407], 00:31:45.017 | 99.00th=[ 6128], 99.50th=[ 6390], 99.90th=[ 7439], 99.95th=[ 7832], 00:31:45.017 | 99.99th=[ 8356] 00:31:45.017 bw ( KiB/s): min=13808, max=15424, per=25.80%, avg=14449.78, stdev=462.10, samples=9 00:31:45.017 iops : min= 1726, max= 1928, avg=1806.22, stdev=57.76, samples=9 00:31:45.017 lat (msec) : 2=0.18%, 4=23.19%, 10=76.63% 00:31:45.017 cpu : usr=97.32%, sys=2.26%, ctx=14, majf=0, minf=111 00:31:45.017 IO depths : 1=0.3%, 2=8.4%, 4=62.1%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.017 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.017 issued rwts: total=9030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.017 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:45.017 filename1: (groupid=0, jobs=1): err= 0: pid=4141488: Mon Jul 15 13:01:35 2024 00:31:45.017 read: IOPS=1703, BW=13.3MiB/s (14.0MB/s)(66.6MiB/5001msec) 00:31:45.017 slat (nsec): min=7550, max=92116, avg=27862.21, stdev=12612.29 00:31:45.017 clat (usec): min=1013, max=8285, avg=4608.86, stdev=563.25 00:31:45.017 lat (usec): min=1028, max=8335, avg=4636.72, stdev=563.81 00:31:45.017 clat percentiles (usec): 00:31:45.017 | 1.00th=[ 3228], 5.00th=[ 3884], 10.00th=[ 4146], 20.00th=[ 4293], 00:31:45.017 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4621], 00:31:45.017 | 70.00th=[ 4686], 80.00th=[ 4883], 90.00th=[ 5211], 95.00th=[ 5538], 00:31:45.017 | 99.00th=[ 6521], 99.50th=[ 7111], 99.90th=[ 7898], 
99.95th=[ 8094], 00:31:45.017 | 99.99th=[ 8291] 00:31:45.017 bw ( KiB/s): min=13232, max=14348, per=24.33%, avg=13626.22, stdev=326.70, samples=9 00:31:45.017 iops : min= 1654, max= 1793, avg=1703.22, stdev=40.70, samples=9 00:31:45.017 lat (msec) : 2=0.18%, 4=6.38%, 10=93.44% 00:31:45.017 cpu : usr=95.86%, sys=3.28%, ctx=53, majf=0, minf=63 00:31:45.017 IO depths : 1=0.3%, 2=8.5%, 4=64.4%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.017 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.017 issued rwts: total=8520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.017 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:45.017 00:31:45.017 Run status group 0 (all jobs): 00:31:45.017 READ: bw=54.7MiB/s (57.3MB/s), 13.2MiB/s-14.1MiB/s (13.8MB/s-14.8MB/s), io=274MiB (287MB), run=5001-5005msec 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.017 00:31:45.017 real 0m24.892s 00:31:45.017 user 5m8.000s 00:31:45.017 sys 0m4.345s 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:45.017 13:01:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:45.017 ************************************ 00:31:45.017 END TEST fio_dif_rand_params 00:31:45.017 ************************************ 00:31:45.017 13:01:35 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:45.017 13:01:35 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:45.017 
13:01:35 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:45.017 13:01:35 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:45.017 13:01:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:45.017 ************************************ 00:31:45.017 START TEST fio_dif_digest 00:31:45.017 ************************************ 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:45.017 bdev_null0 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:45.017 [2024-07-15 13:01:36.069956] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:45.017 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # 
gen_nvmf_target_json 0 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:45.018 { 00:31:45.018 "params": { 00:31:45.018 "name": "Nvme$subsystem", 00:31:45.018 "trtype": "$TEST_TRANSPORT", 00:31:45.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:45.018 "adrfam": "ipv4", 00:31:45.018 "trsvcid": "$NVMF_PORT", 00:31:45.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:45.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:45.018 "hdgst": ${hdgst:-false}, 00:31:45.018 "ddgst": ${ddgst:-false} 00:31:45.018 }, 00:31:45.018 "method": "bdev_nvme_attach_controller" 00:31:45.018 } 00:31:45.018 EOF 00:31:45.018 )") 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:45.018 "params": { 00:31:45.018 "name": "Nvme0", 00:31:45.018 "trtype": "tcp", 00:31:45.018 "traddr": "10.0.0.2", 00:31:45.018 "adrfam": "ipv4", 00:31:45.018 "trsvcid": "4420", 00:31:45.018 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:45.018 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:45.018 "hdgst": true, 00:31:45.018 "ddgst": true 00:31:45.018 }, 00:31:45.018 "method": "bdev_nvme_attach_controller" 00:31:45.018 }' 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:45.018 13:01:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:45.018 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:45.018 ... 
00:31:45.018 fio-3.35 00:31:45.018 Starting 3 threads 00:31:45.018 EAL: No free 2048 kB hugepages reported on node 1 00:31:57.211 00:31:57.211 filename0: (groupid=0, jobs=1): err= 0: pid=4142738: Mon Jul 15 13:01:47 2024 00:31:57.211 read: IOPS=182, BW=22.9MiB/s (24.0MB/s)(230MiB/10050msec) 00:31:57.211 slat (nsec): min=9752, max=53743, avg=24396.16, stdev=8192.86 00:31:57.211 clat (usec): min=9585, max=55257, avg=16352.35, stdev=1624.08 00:31:57.211 lat (usec): min=9603, max=55276, avg=16376.75, stdev=1624.02 00:31:57.211 clat percentiles (usec): 00:31:57.211 | 1.00th=[13698], 5.00th=[14615], 10.00th=[15139], 20.00th=[15533], 00:31:57.211 | 30.00th=[15795], 40.00th=[16057], 50.00th=[16188], 60.00th=[16581], 00:31:57.211 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17695], 95.00th=[17957], 00:31:57.211 | 99.00th=[18744], 99.50th=[19006], 99.90th=[52167], 99.95th=[55313], 00:31:57.211 | 99.99th=[55313] 00:31:57.211 bw ( KiB/s): min=22784, max=24320, per=32.78%, avg=23498.40, stdev=434.70, samples=20 00:31:57.211 iops : min= 178, max= 190, avg=183.55, stdev= 3.38, samples=20 00:31:57.211 lat (msec) : 10=0.05%, 20=99.78%, 50=0.05%, 100=0.11% 00:31:57.211 cpu : usr=96.35%, sys=3.02%, ctx=138, majf=0, minf=164 00:31:57.211 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.211 issued rwts: total=1838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.211 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:57.211 filename0: (groupid=0, jobs=1): err= 0: pid=4142740: Mon Jul 15 13:01:47 2024 00:31:57.211 read: IOPS=195, BW=24.5MiB/s (25.7MB/s)(246MiB/10050msec) 00:31:57.211 slat (nsec): min=6199, max=53004, avg=31052.97, stdev=6733.88 00:31:57.211 clat (usec): min=11584, max=62461, avg=15256.34, stdev=2423.36 00:31:57.211 lat (usec): min=11627, max=62476, 
avg=15287.39, stdev=2422.38 00:31:57.211 clat percentiles (usec): 00:31:57.211 | 1.00th=[12780], 5.00th=[13566], 10.00th=[13960], 20.00th=[14353], 00:31:57.211 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15139], 60.00th=[15401], 00:31:57.211 | 70.00th=[15664], 80.00th=[15926], 90.00th=[16319], 95.00th=[16712], 00:31:57.211 | 99.00th=[17433], 99.50th=[17957], 99.90th=[62653], 99.95th=[62653], 00:31:57.211 | 99.99th=[62653] 00:31:57.211 bw ( KiB/s): min=23040, max=25856, per=35.10%, avg=25164.80, stdev=599.51, samples=20 00:31:57.211 iops : min= 180, max= 202, avg=196.60, stdev= 4.68, samples=20 00:31:57.211 lat (msec) : 20=99.75%, 100=0.25% 00:31:57.211 cpu : usr=95.71%, sys=3.86%, ctx=28, majf=0, minf=156 00:31:57.211 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.211 issued rwts: total=1969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.211 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:57.211 filename0: (groupid=0, jobs=1): err= 0: pid=4142741: Mon Jul 15 13:01:47 2024 00:31:57.211 read: IOPS=181, BW=22.7MiB/s (23.8MB/s)(228MiB/10049msec) 00:31:57.211 slat (nsec): min=5768, max=72239, avg=18798.63, stdev=8219.19 00:31:57.211 clat (usec): min=10587, max=53368, avg=16500.75, stdev=1592.91 00:31:57.211 lat (usec): min=10600, max=53394, avg=16519.55, stdev=1593.18 00:31:57.211 clat percentiles (usec): 00:31:57.211 | 1.00th=[13829], 5.00th=[14877], 10.00th=[15270], 20.00th=[15664], 00:31:57.211 | 30.00th=[15926], 40.00th=[16188], 50.00th=[16450], 60.00th=[16581], 00:31:57.211 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17957], 95.00th=[18220], 00:31:57.211 | 99.00th=[19530], 99.50th=[19792], 99.90th=[49021], 99.95th=[53216], 00:31:57.211 | 99.99th=[53216] 00:31:57.211 bw ( KiB/s): min=22528, max=24320, per=32.48%, avg=23285.50, stdev=479.43, 
samples=20 00:31:57.211 iops : min= 176, max= 190, avg=181.90, stdev= 3.75, samples=20 00:31:57.211 lat (msec) : 20=99.51%, 50=0.44%, 100=0.05% 00:31:57.211 cpu : usr=97.57%, sys=2.12%, ctx=27, majf=0, minf=175 00:31:57.211 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.211 issued rwts: total=1822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.211 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:57.211 00:31:57.211 Run status group 0 (all jobs): 00:31:57.211 READ: bw=70.0MiB/s (73.4MB/s), 22.7MiB/s-24.5MiB/s (23.8MB/s-25.7MB/s), io=704MiB (738MB), run=10049-10050msec 00:31:57.211 13:01:47 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:57.211 13:01:47 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:57.211 13:01:47 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:57.211 13:01:47 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:57.211 13:01:47 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:57.211 13:01:47 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:57.211 13:01:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.211 13:01:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:57.211 13:01:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.211 13:01:47 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:57.211 13:01:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.211 13:01:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:57.211 13:01:47 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.211 00:31:57.211 real 0m11.342s 00:31:57.211 user 0m41.360s 00:31:57.211 sys 0m1.209s 00:31:57.211 13:01:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:57.211 13:01:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:57.211 ************************************ 00:31:57.211 END TEST fio_dif_digest 00:31:57.211 ************************************ 00:31:57.211 13:01:47 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:57.211 13:01:47 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:57.211 13:01:47 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:57.211 13:01:47 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:57.211 13:01:47 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:57.211 13:01:47 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:57.211 13:01:47 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:57.211 13:01:47 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:57.211 13:01:47 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:57.211 rmmod nvme_tcp 00:31:57.211 rmmod nvme_fabrics 00:31:57.211 rmmod nvme_keyring 00:31:57.211 13:01:47 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:57.211 13:01:47 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:57.211 13:01:47 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:57.211 13:01:47 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 4133399 ']' 00:31:57.211 13:01:47 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 4133399 00:31:57.211 13:01:47 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 4133399 ']' 00:31:57.211 13:01:47 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 4133399 00:31:57.211 13:01:47 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:31:57.211 13:01:47 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:57.211 13:01:47 nvmf_dif -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4133399 00:31:57.211 13:01:47 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:57.211 13:01:47 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:57.211 13:01:47 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4133399' 00:31:57.211 killing process with pid 4133399 00:31:57.211 13:01:47 nvmf_dif -- common/autotest_common.sh@967 -- # kill 4133399 00:31:57.211 13:01:47 nvmf_dif -- common/autotest_common.sh@972 -- # wait 4133399 00:31:57.211 13:01:47 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:57.211 13:01:47 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:58.589 Waiting for block devices as requested 00:31:58.589 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:31:58.589 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:58.589 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:58.589 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:58.848 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:58.848 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:58.848 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:58.848 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:59.107 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:59.107 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:59.107 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:59.366 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:59.366 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:59.366 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:59.625 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:59.625 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:59.625 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:59.625 13:01:51 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:59.625 13:01:51 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:59.625 
13:01:51 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:59.625 13:01:51 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:59.625 13:01:51 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.625 13:01:51 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:59.625 13:01:51 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.160 13:01:53 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:02.160 00:32:02.160 real 1m14.540s 00:32:02.160 user 7m42.564s 00:32:02.160 sys 0m18.259s 00:32:02.160 13:01:53 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:02.160 13:01:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:02.160 ************************************ 00:32:02.160 END TEST nvmf_dif 00:32:02.160 ************************************ 00:32:02.160 13:01:53 -- common/autotest_common.sh@1142 -- # return 0 00:32:02.161 13:01:53 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:02.161 13:01:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:02.161 13:01:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:02.161 13:01:53 -- common/autotest_common.sh@10 -- # set +x 00:32:02.161 ************************************ 00:32:02.161 START TEST nvmf_abort_qd_sizes 00:32:02.161 ************************************ 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:02.161 * Looking for test storage... 
00:32:02.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:02.161 13:01:53 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:32:02.161 13:01:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:07.437 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:07.437 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:32:07.437 Found net devices under 0000:af:00.0: cvl_0_0 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:07.437 Found net devices under 0000:af:00.1: cvl_0_1 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:07.437 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:07.438 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:07.438 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:07.438 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:07.438 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:07.438 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:07.438 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:07.698 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:07.698 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.698 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:07.698 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.698 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:07.698 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:07.698 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:07.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:07.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:32:07.698 00:32:07.698 --- 10.0.0.2 ping statistics --- 00:32:07.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.698 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:32:07.698 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:07.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:07.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:32:07.698 00:32:07.698 --- 10.0.0.1 ping statistics --- 00:32:07.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.698 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:32:07.698 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:07.698 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:32:07.698 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:07.698 13:01:59 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:10.987 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:10.987 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:10.987 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:10.987 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:10.987 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:10.987 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:10.987 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:10.987 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:10.987 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:10.987 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:10.987 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:10.987 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:10.987 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:10.987 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:10.987 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:32:10.987 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:11.598 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:32:11.598 13:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:11.598 13:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:11.598 13:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:11.598 13:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:11.599 13:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:11.599 13:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:11.599 13:02:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:11.599 13:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:11.599 13:02:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:11.599 13:02:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:11.873 13:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=4151013 00:32:11.873 13:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 4151013 00:32:11.873 13:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:11.873 13:02:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 4151013 ']' 00:32:11.873 13:02:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:11.873 13:02:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:11.873 13:02:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:11.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:11.873 13:02:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:11.873 13:02:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:11.873 [2024-07-15 13:02:03.585687] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:32:11.873 [2024-07-15 13:02:03.585746] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:11.873 EAL: No free 2048 kB hugepages reported on node 1 00:32:11.873 [2024-07-15 13:02:03.670553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:11.873 [2024-07-15 13:02:03.762963] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:11.873 [2024-07-15 13:02:03.763008] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:11.873 [2024-07-15 13:02:03.763018] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:11.873 [2024-07-15 13:02:03.763027] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:11.873 [2024-07-15 13:02:03.763034] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:11.873 [2024-07-15 13:02:03.763094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:11.873 [2024-07-15 13:02:03.763207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:11.873 [2024-07-15 13:02:03.763320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:11.873 [2024-07-15 13:02:03.763322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:86:00.0 ]] 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:86:00.0 ]] 
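Annotation: before `nvmf_tgt` was launched above, the trace's `nvmf_tcp_init` steps (nvmf/common.sh@229-268, earlier in this log) moved one NIC port into a private network namespace so one host can act as both TCP initiator and target. The following is a minimal dry-run sketch of those steps, not the script itself; the interface names, namespace name, IPs, and port come from the log, while the `DRY_RUN`/`run` helper is an illustrative addition so the sketch can be printed without root:

```shell
#!/bin/sh
# Dry-run sketch of the netns setup seen in the nvmf_tcp_init trace.
# With DRY_RUN=1 (default) it only prints the commands; running them
# for real needs root and the cvl_0_0/cvl_0_1 NIC ports from the log.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

TARGET_IF=cvl_0_0      # moved into the namespace, gets the target IP
INITIATOR_IF=cvl_0_1   # stays in the default namespace
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Let target-bound NVMe/TCP traffic (port 4420) through the firewall.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
```

This is why `nvmf_tgt` is started under `ip netns exec cvl_0_0_ns_spdk` above: it listens on 10.0.0.2 inside the namespace, and the `ping` exchanges in the log verify both directions before the target comes up.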
00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:86:00.0 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:86:00.0 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:12.807 13:02:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:12.807 ************************************ 00:32:12.807 START TEST spdk_target_abort 00:32:12.807 ************************************ 00:32:12.807 13:02:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:32:12.807 13:02:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:12.807 13:02:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:86:00.0 -b spdk_target 00:32:12.807 13:02:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.807 13:02:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:16.092 spdk_targetn1 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:16.092 [2024-07-15 13:02:07.477098] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:16.092 [2024-07-15 13:02:07.517364] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:16.092 13:02:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:16.092 EAL: No free 2048 kB hugepages reported on node 1 00:32:19.396 Initializing NVMe Controllers 00:32:19.396 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:19.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:19.396 Initialization complete. Launching workers. 
00:32:19.396 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6996, failed: 0 00:32:19.396 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1170, failed to submit 5826 00:32:19.396 success 732, unsuccess 438, failed 0 00:32:19.396 13:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:19.396 13:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:19.396 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.685 Initializing NVMe Controllers 00:32:22.685 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:22.685 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:22.685 Initialization complete. Launching workers. 
00:32:22.685 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8421, failed: 0 00:32:22.685 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1212, failed to submit 7209 00:32:22.685 success 320, unsuccess 892, failed 0 00:32:22.685 13:02:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:22.685 13:02:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:22.685 EAL: No free 2048 kB hugepages reported on node 1 00:32:25.971 Initializing NVMe Controllers 00:32:25.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:25.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:25.971 Initialization complete. Launching workers. 
00:32:25.971 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17756, failed: 0 00:32:25.971 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1961, failed to submit 15795 00:32:25.971 success 167, unsuccess 1794, failed 0 00:32:25.971 13:02:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:25.971 13:02:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.971 13:02:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:25.971 13:02:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.971 13:02:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:25.971 13:02:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.971 13:02:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:26.908 13:02:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.908 13:02:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 4151013 00:32:26.908 13:02:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 4151013 ']' 00:32:26.908 13:02:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 4151013 00:32:26.908 13:02:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:32:26.908 13:02:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:26.908 13:02:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4151013 00:32:26.908 13:02:18 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:26.908 13:02:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:26.908 13:02:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4151013' 00:32:26.908 killing process with pid 4151013 00:32:26.908 13:02:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 4151013 00:32:26.908 13:02:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 4151013 00:32:27.167 00:32:27.167 real 0m14.381s 00:32:27.167 user 0m57.802s 00:32:27.167 sys 0m2.097s 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:27.167 ************************************ 00:32:27.167 END TEST spdk_target_abort 00:32:27.167 ************************************ 00:32:27.167 13:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:27.167 13:02:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:27.167 13:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:27.167 13:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:27.167 13:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:27.167 ************************************ 00:32:27.167 START TEST kernel_target_abort 00:32:27.167 ************************************ 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local 
ip 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:27.167 13:02:19 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:27.167 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:27.427 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:27.427 13:02:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:29.966 Waiting for block devices as requested 00:32:29.966 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:32:29.966 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:30.225 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:30.225 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:30.225 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:30.485 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:30.485 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:30.485 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:30.744 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:30.744 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:30.744 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:30.744 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:31.003 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:31.003 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:31.003 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:31.262 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:31.262 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:31.262 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:31.262 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:31.262 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:32:31.262 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:31.262 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:31.262 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:31.262 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:31.262 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:31.262 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:31.262 No valid GPT data, bailing 00:32:31.262 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:31.520 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:31.520 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:31.520 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:31.520 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:31.520 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:31.520 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:31.520 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:31.520 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:31.520 13:02:23 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:32:31.520 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:32:31.521 00:32:31.521 Discovery Log Number of Records 2, Generation counter 2 00:32:31.521 =====Discovery Log Entry 0====== 00:32:31.521 trtype: tcp 00:32:31.521 adrfam: ipv4 00:32:31.521 subtype: current discovery subsystem 00:32:31.521 treq: not specified, sq flow control disable supported 00:32:31.521 portid: 1 00:32:31.521 trsvcid: 4420 00:32:31.521 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:31.521 traddr: 10.0.0.1 00:32:31.521 eflags: none 00:32:31.521 sectype: none 00:32:31.521 =====Discovery Log Entry 1====== 00:32:31.521 trtype: tcp 00:32:31.521 adrfam: ipv4 00:32:31.521 subtype: nvme subsystem 00:32:31.521 treq: not specified, sq flow control disable supported 00:32:31.521 portid: 1 00:32:31.521 trsvcid: 4420 00:32:31.521 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:31.521 traddr: 10.0.0.1 00:32:31.521 eflags: none 00:32:31.521 
sectype: none 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:31.521 13:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:31.521 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.810 Initializing NVMe Controllers 00:32:34.810 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:34.810 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:34.810 Initialization complete. Launching workers. 
00:32:34.810 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 53709, failed: 0 00:32:34.810 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 53709, failed to submit 0 00:32:34.810 success 0, unsuccess 53709, failed 0 00:32:34.810 13:02:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:34.810 13:02:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:34.810 EAL: No free 2048 kB hugepages reported on node 1 00:32:38.242 Initializing NVMe Controllers 00:32:38.242 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:38.242 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:38.242 Initialization complete. Launching workers. 
00:32:38.242 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 87706, failed: 0 00:32:38.242 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22102, failed to submit 65604 00:32:38.242 success 0, unsuccess 22102, failed 0 00:32:38.242 13:02:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:38.242 13:02:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:38.242 EAL: No free 2048 kB hugepages reported on node 1 00:32:40.770 Initializing NVMe Controllers 00:32:40.770 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:40.770 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:40.770 Initialization complete. Launching workers. 
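The three abort runs in this test differ only in queue depth (`-q 4`, `-q 24`, `-q 64`), driven by the `qds=(4 24 64)` array set up earlier. A minimal sketch of that sweep as the script drives it; the binary path and target string are copied from this run, and the `run()` wrapper only echoes each command rather than executing it, since the SPDK build tree is not assumed to be present:

```shell
#!/usr/bin/env bash
# Sketch of the queue-depth sweep from target/abort_qd_sizes.sh.
# run() echoes instead of executing; swap its body for "$@" on a real host.
abort_bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
qds=(4 24 64)

run() { echo "+ $*"; }

for qd in "${qds[@]}"; do
  # -w rw -M 50: 50/50 read/write mix; -o 4096: 4 KiB I/O size
  run "$abort_bin" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done
```

Each iteration submits I/O at the given depth and then aborts it, which is why the per-run summary lines above report `abort submitted` counts that shrink as queue depth grows.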
00:32:40.770 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 84364, failed: 0 00:32:40.770 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21094, failed to submit 63270 00:32:40.770 success 0, unsuccess 21094, failed 0 00:32:40.770 13:02:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:40.770 13:02:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:40.770 13:02:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:40.770 13:02:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:40.770 13:02:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:40.770 13:02:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:40.770 13:02:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:40.770 13:02:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:40.770 13:02:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:40.770 13:02:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:43.302 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:43.560 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:43.560 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:43.561 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:43.561 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:43.561 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:43.561 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:43.561 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:43.561 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:43.561 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:43.561 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:43.561 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:43.561 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:43.561 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:43.561 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:43.561 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:44.498 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:32:44.498 00:32:44.498 real 0m17.326s 00:32:44.498 user 0m8.449s 00:32:44.498 sys 0m5.078s 00:32:44.498 13:02:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:44.498 13:02:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:44.498 ************************************ 00:32:44.498 END TEST kernel_target_abort 00:32:44.498 ************************************ 00:32:44.498 13:02:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:44.498 13:02:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:44.498 13:02:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:44.498 13:02:36 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:44.498 13:02:36 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:44.757 13:02:36 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:44.757 13:02:36 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:44.757 13:02:36 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:44.757 13:02:36 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:44.757 rmmod nvme_tcp 00:32:44.757 rmmod nvme_fabrics 
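The `clean_kernel_target` sequence above tears down the kernel nvmet target through configfs in reverse creation order: disable the namespace, break the port-to-subsystem link, then remove the namespace, port, and subsystem directories before unloading the modules. A sketch of that order with the configfs root as a variable so it can be exercised against a scratch directory; the target of the bare `echo 0` in the log is an assumption (xtrace does not show redirections), and on a real host the tree lives under `/sys/kernel/config/nvmet`, requires root, and only `rmdir` works:

```shell
#!/usr/bin/env bash
# Teardown order mirrored from nvmf/common.sh clean_kernel_target.
NVMET_ROOT=${NVMET_ROOT:-/sys/kernel/config/nvmet}
subnqn=nqn.2016-06.io.spdk:testnqn

# On real configfs only rmdir removes nodes; the rm -rf fallback is just so
# this sketch can run against an ordinary scratch directory.
teardown_dir() { rmdir "$1" 2>/dev/null || rm -rf "$1"; }

clean_kernel_target() {
  [ -e "$NVMET_ROOT/subsystems/$subnqn" ] || return 0
  if [ -e "$NVMET_ROOT/subsystems/$subnqn/namespaces/1/enable" ]; then
    echo 0 > "$NVMET_ROOT/subsystems/$subnqn/namespaces/1/enable"  # assumed target of 'echo 0'
  fi
  rm -f "$NVMET_ROOT/ports/1/subsystems/$subnqn"       # break port->subsystem link first
  teardown_dir "$NVMET_ROOT/subsystems/$subnqn/namespaces/1"
  teardown_dir "$NVMET_ROOT/ports/1"
  teardown_dir "$NVMET_ROOT/subsystems/$subnqn"
  modprobe -r nvmet_tcp nvmet 2>/dev/null || true      # finally unload the modules
}
# clean_kernel_target   # call on a host with an active kernel nvmet target
```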
00:32:44.757 rmmod nvme_keyring 00:32:44.757 13:02:36 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:44.757 13:02:36 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:44.757 13:02:36 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:44.757 13:02:36 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 4151013 ']' 00:32:44.757 13:02:36 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 4151013 00:32:44.757 13:02:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 4151013 ']' 00:32:44.757 13:02:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 4151013 00:32:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (4151013) - No such process 00:32:44.757 13:02:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 4151013 is not found' 00:32:44.757 Process with pid 4151013 is not found 00:32:44.757 13:02:36 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:44.757 13:02:36 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:47.291 Waiting for block devices as requested 00:32:47.291 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:32:47.549 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:47.549 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:47.808 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:47.808 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:47.808 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:47.808 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:48.066 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:48.066 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:48.066 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:48.325 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:48.325 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:48.325 0000:80:04.4 (8086 2021): 
vfio-pci -> ioatdma 00:32:48.325 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:48.583 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:48.583 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:48.583 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:48.842 13:02:40 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:48.842 13:02:40 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:48.842 13:02:40 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:48.842 13:02:40 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:48.842 13:02:40 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.842 13:02:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:48.842 13:02:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.745 13:02:42 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:50.745 00:32:50.745 real 0m48.931s 00:32:50.745 user 1m10.694s 00:32:50.745 sys 0m15.834s 00:32:50.746 13:02:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:50.746 13:02:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:50.746 ************************************ 00:32:50.746 END TEST nvmf_abort_qd_sizes 00:32:50.746 ************************************ 00:32:50.746 13:02:42 -- common/autotest_common.sh@1142 -- # return 0 00:32:50.746 13:02:42 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:50.746 13:02:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:50.746 13:02:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:50.746 13:02:42 -- common/autotest_common.sh@10 -- # set +x 00:32:51.003 ************************************ 00:32:51.003 START TEST keyring_file 00:32:51.003 
************************************ 00:32:51.003 13:02:42 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:51.003 * Looking for test storage... 00:32:51.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:51.003 13:02:42 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:51.003 13:02:42 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:51.003 13:02:42 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:51.003 13:02:42 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:51.003 13:02:42 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:51.004 
13:02:42 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:51.004 13:02:42 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:51.004 13:02:42 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:51.004 13:02:42 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:51.004 13:02:42 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.004 13:02:42 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.004 13:02:42 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.004 13:02:42 
keyring_file -- paths/export.sh@5 -- # export PATH 00:32:51.004 13:02:42 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:51.004 13:02:42 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:51.004 13:02:42 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:51.004 13:02:42 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:51.004 13:02:42 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:51.004 13:02:42 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:51.004 13:02:42 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:51.004 13:02:42 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:51.004 13:02:42 keyring_file -- keyring/common.sh@15 -- # local 
name key digest path 00:32:51.004 13:02:42 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:51.004 13:02:42 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:51.004 13:02:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:51.004 13:02:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:51.004 13:02:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QWPZ7LNPd3 00:32:51.004 13:02:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:51.004 13:02:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QWPZ7LNPd3 00:32:51.004 13:02:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QWPZ7LNPd3 00:32:51.004 13:02:42 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.QWPZ7LNPd3 00:32:51.004 13:02:42 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:51.004 13:02:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:51.004 13:02:42 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:51.004 13:02:42 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:51.004 13:02:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:51.004 13:02:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:51.004 13:02:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ByceEcUXX5 00:32:51.004 13:02:42 
keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:51.004 13:02:42 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:51.262 13:02:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ByceEcUXX5 00:32:51.262 13:02:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ByceEcUXX5 00:32:51.262 13:02:42 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ByceEcUXX5 00:32:51.262 13:02:42 keyring_file -- keyring/file.sh@30 -- # tgtpid=4160395 00:32:51.262 13:02:42 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:51.262 13:02:42 keyring_file -- keyring/file.sh@32 -- # waitforlisten 4160395 00:32:51.262 13:02:42 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 4160395 ']' 00:32:51.262 13:02:42 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.262 13:02:42 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:51.262 13:02:42 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:51.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
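The `format_interchange_psk` calls above (which the script hands off to `python -`) wrap each raw hex key in the NVMe TLS PSK interchange format before writing it to the temp file. A sketch of that framing, under the assumption that it follows the standard layout: the literal prefix `NVMeTLSkey-1`, a two-digit hash indicator (`00` here, since `digest=0` means no hash transform), and the base64 of the key bytes followed by their CRC32 (little-endian), terminated by `:`. The function name and digest numbering are taken from this log, not from SPDK's source:

```python
import base64
import binascii


def format_interchange_psk(key_hex: str, digest: int = 0) -> str:
    """Frame a raw hex PSK in the assumed NVMe TLS interchange layout:

        NVMeTLSkey-1:<hh>:<base64(key || crc32_le(key))>:

    where <hh> is the hash indicator (00 = no hash transform).
    """
    key = bytes.fromhex(key_hex)
    crc = binascii.crc32(key).to_bytes(4, "little")  # integrity check over the key bytes
    b64 = base64.b64encode(key + crc).decode("ascii")
    return f"NVMeTLSkey-1:{digest:02d}:{b64}:"


# key0 from this test run
psk = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
```

The resulting string is what ends up in `/tmp/tmp.QWPZ7LNPd3` (and its `key1` counterpart) with mode 0600, ready for `keyring_file_add_key`.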
00:32:51.262 13:02:42 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:51.262 13:02:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:51.262 [2024-07-15 13:02:43.023577] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:32:51.262 [2024-07-15 13:02:43.023638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4160395 ] 00:32:51.262 EAL: No free 2048 kB hugepages reported on node 1 00:32:51.262 [2024-07-15 13:02:43.105368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.262 [2024-07-15 13:02:43.195062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.196 13:02:43 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:52.196 13:02:43 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:52.196 13:02:43 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:52.196 13:02:43 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.196 13:02:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:52.196 [2024-07-15 13:02:43.958977] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:52.196 null0 00:32:52.196 [2024-07-15 13:02:43.991008] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:52.196 [2024-07-15 13:02:43.991329] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:52.196 [2024-07-15 13:02:43.999024] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:52.196 13:02:44 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.196 13:02:44 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t 
tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:52.196 13:02:44 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:52.196 13:02:44 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:52.196 13:02:44 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:52.196 13:02:44 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:52.196 13:02:44 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:52.196 13:02:44 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:52.196 13:02:44 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:52.196 13:02:44 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.196 13:02:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:52.196 [2024-07-15 13:02:44.011061] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:52.196 request: 00:32:52.196 { 00:32:52.196 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:52.196 "secure_channel": false, 00:32:52.196 "listen_address": { 00:32:52.196 "trtype": "tcp", 00:32:52.196 "traddr": "127.0.0.1", 00:32:52.196 "trsvcid": "4420" 00:32:52.196 }, 00:32:52.196 "method": "nvmf_subsystem_add_listener", 00:32:52.196 "req_id": 1 00:32:52.196 } 00:32:52.196 Got JSON-RPC error response 00:32:52.196 response: 00:32:52.196 { 00:32:52.196 "code": -32602, 00:32:52.196 "message": "Invalid parameters" 00:32:52.196 } 00:32:52.196 13:02:44 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:52.196 13:02:44 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:52.196 13:02:44 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:52.196 13:02:44 keyring_file -- common/autotest_common.sh@670 -- 
# [[ -n '' ]] 00:32:52.196 13:02:44 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:52.196 13:02:44 keyring_file -- keyring/file.sh@46 -- # bperfpid=4160423 00:32:52.196 13:02:44 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:52.196 13:02:44 keyring_file -- keyring/file.sh@48 -- # waitforlisten 4160423 /var/tmp/bperf.sock 00:32:52.196 13:02:44 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 4160423 ']' 00:32:52.196 13:02:44 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:52.196 13:02:44 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:52.196 13:02:44 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:52.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:52.196 13:02:44 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:52.196 13:02:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:52.196 [2024-07-15 13:02:44.066765] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
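The `NOT`/`valid_exec_arg` dance above expects `nvmf_subsystem_add_listener` to fail (it does, with "Listener already exists"), and the trailing `es=1` / `(( !es == 0 ))` checks confirm the failure happened. At its core this is exit-status inversion with the original status kept for inspection. A minimal standalone version of the idea, not SPDK's actual helper:

```shell
#!/usr/bin/env bash
# Minimal expect-failure wrapper: succeeds iff the wrapped command fails,
# and records the command's real exit status in LAST_ES for later checks.
NOT() {
  local es=0
  "$@" || es=$?
  LAST_ES=$es
  (( es != 0 ))   # invert: a non-zero status from the command makes NOT succeed
}

NOT false && echo "expected failure observed"   # prints "expected failure observed"
```

Keeping the original status around (rather than just inverting) is what lets the script distinguish an ordinary error from a crash, as in the `(( es > 128 ))` signal check above.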
00:32:52.196 [2024-07-15 13:02:44.066820] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4160423 ] 00:32:52.196 EAL: No free 2048 kB hugepages reported on node 1 00:32:52.455 [2024-07-15 13:02:44.147029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.455 [2024-07-15 13:02:44.251485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.390 13:02:45 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:53.390 13:02:45 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:53.390 13:02:45 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QWPZ7LNPd3 00:32:53.390 13:02:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QWPZ7LNPd3 00:32:53.390 13:02:45 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ByceEcUXX5 00:32:53.390 13:02:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ByceEcUXX5 00:32:53.649 13:02:45 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:53.649 13:02:45 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:53.649 13:02:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:53.649 13:02:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:53.649 13:02:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:54.215 13:02:46 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.QWPZ7LNPd3 == 
\/\t\m\p\/\t\m\p\.\Q\W\P\Z\7\L\N\P\d\3 ]] 00:32:54.215 13:02:46 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:32:54.215 13:02:46 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:54.215 13:02:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:54.215 13:02:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:54.215 13:02:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:54.473 13:02:46 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.ByceEcUXX5 == \/\t\m\p\/\t\m\p\.\B\y\c\e\E\c\U\X\X\5 ]] 00:32:54.473 13:02:46 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:54.473 13:02:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:54.473 13:02:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:54.473 13:02:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:54.473 13:02:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:54.473 13:02:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:55.039 13:02:46 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:55.040 13:02:46 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:55.040 13:02:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:55.040 13:02:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:55.040 13:02:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:55.040 13:02:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:55.040 13:02:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:55.298 13:02:47 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:32:55.298 13:02:47 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:55.298 13:02:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:55.556 [2024-07-15 13:02:47.257372] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:55.556 nvme0n1 00:32:55.556 13:02:47 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:55.556 13:02:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:55.556 13:02:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:55.556 13:02:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:55.556 13:02:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:55.556 13:02:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:55.815 13:02:47 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:55.815 13:02:47 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:55.815 13:02:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:55.815 13:02:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:55.815 13:02:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:55.815 13:02:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:55.815 13:02:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:56.074 13:02:47 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:56.074 13:02:47 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:56.074 Running I/O for 1 seconds... 00:32:57.449 00:32:57.449 Latency(us) 00:32:57.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:57.449 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:57.449 nvme0n1 : 1.01 9237.04 36.08 0.00 0.00 13807.54 6881.28 23592.96 00:32:57.449 =================================================================================================================== 00:32:57.449 Total : 9237.04 36.08 0.00 0.00 13807.54 6881.28 23592.96 00:32:57.449 0 00:32:57.449 13:02:49 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:57.449 13:02:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:57.449 13:02:49 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:57.449 13:02:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:57.449 13:02:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:57.449 13:02:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:57.449 13:02:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:57.449 13:02:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:57.707 13:02:49 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:57.707 13:02:49 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:57.707 13:02:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:57.707 13:02:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:57.707 13:02:49 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:57.707 13:02:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:57.707 13:02:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:57.965 13:02:49 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:57.965 13:02:49 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:57.965 13:02:49 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:57.965 13:02:49 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:57.965 13:02:49 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:57.965 13:02:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:57.965 13:02:49 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:57.965 13:02:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:57.965 13:02:49 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:57.965 13:02:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:58.223 [2024-07-15 13:02:50.003075] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:58.223 [2024-07-15 13:02:50.003864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe50ce0 (107): Transport endpoint is not connected 00:32:58.223 [2024-07-15 13:02:50.004855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe50ce0 (9): Bad file descriptor 00:32:58.223 [2024-07-15 13:02:50.005854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:58.223 [2024-07-15 13:02:50.005870] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:58.223 [2024-07-15 13:02:50.005882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:58.223 request: 00:32:58.223 { 00:32:58.223 "name": "nvme0", 00:32:58.223 "trtype": "tcp", 00:32:58.223 "traddr": "127.0.0.1", 00:32:58.223 "adrfam": "ipv4", 00:32:58.223 "trsvcid": "4420", 00:32:58.223 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:58.223 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:58.223 "prchk_reftag": false, 00:32:58.223 "prchk_guard": false, 00:32:58.223 "hdgst": false, 00:32:58.223 "ddgst": false, 00:32:58.223 "psk": "key1", 00:32:58.223 "method": "bdev_nvme_attach_controller", 00:32:58.223 "req_id": 1 00:32:58.223 } 00:32:58.223 Got JSON-RPC error response 00:32:58.223 response: 00:32:58.223 { 00:32:58.223 "code": -5, 00:32:58.223 "message": "Input/output error" 00:32:58.223 } 00:32:58.223 13:02:50 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:58.223 13:02:50 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:58.223 13:02:50 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:58.223 13:02:50 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:58.223 13:02:50 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:58.223 
13:02:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:58.223 13:02:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:58.223 13:02:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:58.223 13:02:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:58.223 13:02:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:58.481 13:02:50 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:58.481 13:02:50 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:58.481 13:02:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:58.481 13:02:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:58.481 13:02:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:58.481 13:02:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:58.481 13:02:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:58.738 13:02:50 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:58.739 13:02:50 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:58.739 13:02:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:58.997 13:02:50 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:58.997 13:02:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:59.255 13:02:51 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:59.255 13:02:51 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:59.255 13:02:51 keyring_file 
-- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:59.822 13:02:51 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:59.822 13:02:51 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.QWPZ7LNPd3 00:32:59.822 13:02:51 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.QWPZ7LNPd3 00:32:59.822 13:02:51 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:59.822 13:02:51 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.QWPZ7LNPd3 00:32:59.822 13:02:51 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:59.822 13:02:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:59.822 13:02:51 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:59.822 13:02:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:59.822 13:02:51 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QWPZ7LNPd3 00:32:59.822 13:02:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QWPZ7LNPd3 00:33:00.081 [2024-07-15 13:02:51.779547] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.QWPZ7LNPd3': 0100660 00:33:00.081 [2024-07-15 13:02:51.779583] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:00.081 request: 00:33:00.081 { 00:33:00.081 "name": "key0", 00:33:00.081 "path": "/tmp/tmp.QWPZ7LNPd3", 00:33:00.081 "method": "keyring_file_add_key", 00:33:00.081 "req_id": 1 00:33:00.081 } 00:33:00.081 Got JSON-RPC error response 00:33:00.081 response: 00:33:00.081 { 00:33:00.081 "code": -1, 00:33:00.081 "message": "Operation not permitted" 
00:33:00.081 } 00:33:00.081 13:02:51 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:00.081 13:02:51 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:00.081 13:02:51 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:00.081 13:02:51 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:00.081 13:02:51 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.QWPZ7LNPd3 00:33:00.081 13:02:51 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QWPZ7LNPd3 00:33:00.081 13:02:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QWPZ7LNPd3 00:33:00.340 13:02:52 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.QWPZ7LNPd3 00:33:00.340 13:02:52 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:33:00.340 13:02:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:00.340 13:02:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:00.340 13:02:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:00.340 13:02:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:00.340 13:02:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:00.600 13:02:52 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:00.600 13:02:52 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:00.600 13:02:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:00.600 13:02:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:00.600 13:02:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:00.600 13:02:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:00.600 13:02:52 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:00.600 13:02:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:00.600 13:02:52 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:00.600 13:02:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:00.860 [2024-07-15 13:02:52.774277] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.QWPZ7LNPd3': No such file or directory 00:33:00.860 [2024-07-15 13:02:52.774312] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:00.860 [2024-07-15 13:02:52.774350] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:00.860 [2024-07-15 13:02:52.774361] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:00.860 [2024-07-15 13:02:52.774372] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:00.860 request: 00:33:00.860 { 00:33:00.860 "name": "nvme0", 00:33:00.860 "trtype": "tcp", 00:33:00.860 "traddr": "127.0.0.1", 00:33:00.860 "adrfam": "ipv4", 00:33:00.860 "trsvcid": "4420", 00:33:00.860 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:00.860 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:00.860 
"prchk_reftag": false, 00:33:00.860 "prchk_guard": false, 00:33:00.860 "hdgst": false, 00:33:00.860 "ddgst": false, 00:33:00.860 "psk": "key0", 00:33:00.860 "method": "bdev_nvme_attach_controller", 00:33:00.860 "req_id": 1 00:33:00.860 } 00:33:00.860 Got JSON-RPC error response 00:33:00.860 response: 00:33:00.860 { 00:33:00.860 "code": -19, 00:33:00.860 "message": "No such device" 00:33:00.860 } 00:33:01.118 13:02:52 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:01.118 13:02:52 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:01.118 13:02:52 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:01.118 13:02:52 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:01.118 13:02:52 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:01.118 13:02:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:01.377 13:02:53 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:01.377 13:02:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:01.377 13:02:53 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:01.377 13:02:53 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:01.377 13:02:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:01.377 13:02:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:01.377 13:02:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.OtW3lxRAgi 00:33:01.377 13:02:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:01.377 13:02:53 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:01.377 13:02:53 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:01.377 13:02:53 keyring_file -- 
nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:01.377 13:02:53 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:01.377 13:02:53 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:01.377 13:02:53 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:01.377 13:02:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.OtW3lxRAgi 00:33:01.377 13:02:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.OtW3lxRAgi 00:33:01.377 13:02:53 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.OtW3lxRAgi 00:33:01.377 13:02:53 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OtW3lxRAgi 00:33:01.377 13:02:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OtW3lxRAgi 00:33:01.635 13:02:53 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:01.635 13:02:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:01.894 nvme0n1 00:33:01.894 13:02:53 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:33:01.894 13:02:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:01.894 13:02:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:01.894 13:02:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:01.894 13:02:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:01.894 13:02:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
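The `format_interchange_psk` step above pipes `prefix=NVMeTLSkey-1`, the hex key, and `digest=0` into an inline `python -` snippet to build the key file. The helper's actual body lives in the SPDK tree (test/nvmf/common.sh) and is not shown in this log; the sketch below is a hypothetical reconstruction that assumes the NVMe/TCP PSK interchange convention of appending a little-endian CRC-32 of the key bytes before base64-encoding, and may differ in detail from SPDK's helper.

```python
import base64
import zlib


def format_interchange_psk(key_hex: str, hmac_id: int = 0) -> str:
    """Hypothetical sketch of the format_interchange_psk helper seen in
    the log: wrap a raw hex key in the NVMe TLS PSK interchange format
    'NVMeTLSkey-1:<hmac>:<base64(key || crc32)>:'. The CRC/base64 layout
    is an assumption, not taken from the SPDK source."""
    key = bytes.fromhex(key_hex)
    # Append a little-endian CRC-32 of the key, then base64 the whole blob.
    crc = zlib.crc32(key).to_bytes(4, "little")
    b64 = base64.b64encode(key + crc).decode()
    return f"NVMeTLSkey-1:{hmac_id:02x}:{b64}:"


# Same inputs the log uses: hex key 00112233445566778899aabbccddeeff, digest 0.
psk = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
```

The resulting string is what gets written to the `mktemp` path (here `/tmp/tmp.OtW3lxRAgi`) and registered via `keyring_file_add_key`.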
00:33:02.153 13:02:53 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:02.153 13:02:53 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:02.153 13:02:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:02.413 13:02:54 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:33:02.413 13:02:54 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:33:02.413 13:02:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:02.413 13:02:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:02.413 13:02:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:02.672 13:02:54 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:02.672 13:02:54 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:33:02.672 13:02:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:02.672 13:02:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:02.672 13:02:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:02.672 13:02:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:02.672 13:02:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:02.930 13:02:54 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:02.930 13:02:54 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:02.930 13:02:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:03.190 13:02:54 keyring_file -- keyring/file.sh@104 -- # bperf_cmd 
keyring_get_keys 00:33:03.190 13:02:54 keyring_file -- keyring/file.sh@104 -- # jq length 00:33:03.190 13:02:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:03.757 13:02:55 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:03.757 13:02:55 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OtW3lxRAgi 00:33:03.757 13:02:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OtW3lxRAgi 00:33:03.757 13:02:55 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ByceEcUXX5 00:33:03.758 13:02:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ByceEcUXX5 00:33:04.017 13:02:55 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:04.017 13:02:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:04.275 nvme0n1 00:33:04.534 13:02:56 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:04.534 13:02:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:04.793 13:02:56 keyring_file -- keyring/file.sh@112 -- # config='{ 00:33:04.793 "subsystems": [ 00:33:04.793 { 00:33:04.793 "subsystem": "keyring", 00:33:04.793 "config": [ 00:33:04.793 { 00:33:04.793 "method": "keyring_file_add_key", 00:33:04.793 
"params": { 00:33:04.793 "name": "key0", 00:33:04.793 "path": "/tmp/tmp.OtW3lxRAgi" 00:33:04.793 } 00:33:04.793 }, 00:33:04.793 { 00:33:04.793 "method": "keyring_file_add_key", 00:33:04.793 "params": { 00:33:04.793 "name": "key1", 00:33:04.793 "path": "/tmp/tmp.ByceEcUXX5" 00:33:04.793 } 00:33:04.793 } 00:33:04.793 ] 00:33:04.793 }, 00:33:04.793 { 00:33:04.793 "subsystem": "iobuf", 00:33:04.793 "config": [ 00:33:04.793 { 00:33:04.793 "method": "iobuf_set_options", 00:33:04.793 "params": { 00:33:04.793 "small_pool_count": 8192, 00:33:04.793 "large_pool_count": 1024, 00:33:04.793 "small_bufsize": 8192, 00:33:04.793 "large_bufsize": 135168 00:33:04.793 } 00:33:04.793 } 00:33:04.793 ] 00:33:04.793 }, 00:33:04.793 { 00:33:04.793 "subsystem": "sock", 00:33:04.793 "config": [ 00:33:04.793 { 00:33:04.793 "method": "sock_set_default_impl", 00:33:04.793 "params": { 00:33:04.793 "impl_name": "posix" 00:33:04.793 } 00:33:04.793 }, 00:33:04.793 { 00:33:04.793 "method": "sock_impl_set_options", 00:33:04.793 "params": { 00:33:04.793 "impl_name": "ssl", 00:33:04.793 "recv_buf_size": 4096, 00:33:04.793 "send_buf_size": 4096, 00:33:04.793 "enable_recv_pipe": true, 00:33:04.793 "enable_quickack": false, 00:33:04.793 "enable_placement_id": 0, 00:33:04.793 "enable_zerocopy_send_server": true, 00:33:04.793 "enable_zerocopy_send_client": false, 00:33:04.793 "zerocopy_threshold": 0, 00:33:04.793 "tls_version": 0, 00:33:04.793 "enable_ktls": false 00:33:04.793 } 00:33:04.793 }, 00:33:04.793 { 00:33:04.793 "method": "sock_impl_set_options", 00:33:04.793 "params": { 00:33:04.793 "impl_name": "posix", 00:33:04.793 "recv_buf_size": 2097152, 00:33:04.793 "send_buf_size": 2097152, 00:33:04.793 "enable_recv_pipe": true, 00:33:04.793 "enable_quickack": false, 00:33:04.793 "enable_placement_id": 0, 00:33:04.793 "enable_zerocopy_send_server": true, 00:33:04.793 "enable_zerocopy_send_client": false, 00:33:04.793 "zerocopy_threshold": 0, 00:33:04.793 "tls_version": 0, 00:33:04.793 "enable_ktls": false 
00:33:04.793 } 00:33:04.793 } 00:33:04.793 ] 00:33:04.793 }, 00:33:04.793 { 00:33:04.793 "subsystem": "vmd", 00:33:04.793 "config": [] 00:33:04.793 }, 00:33:04.793 { 00:33:04.793 "subsystem": "accel", 00:33:04.793 "config": [ 00:33:04.793 { 00:33:04.793 "method": "accel_set_options", 00:33:04.793 "params": { 00:33:04.793 "small_cache_size": 128, 00:33:04.793 "large_cache_size": 16, 00:33:04.793 "task_count": 2048, 00:33:04.793 "sequence_count": 2048, 00:33:04.793 "buf_count": 2048 00:33:04.793 } 00:33:04.793 } 00:33:04.793 ] 00:33:04.793 }, 00:33:04.793 { 00:33:04.793 "subsystem": "bdev", 00:33:04.793 "config": [ 00:33:04.793 { 00:33:04.793 "method": "bdev_set_options", 00:33:04.793 "params": { 00:33:04.793 "bdev_io_pool_size": 65535, 00:33:04.793 "bdev_io_cache_size": 256, 00:33:04.793 "bdev_auto_examine": true, 00:33:04.793 "iobuf_small_cache_size": 128, 00:33:04.793 "iobuf_large_cache_size": 16 00:33:04.793 } 00:33:04.793 }, 00:33:04.793 { 00:33:04.793 "method": "bdev_raid_set_options", 00:33:04.793 "params": { 00:33:04.793 "process_window_size_kb": 1024 00:33:04.793 } 00:33:04.793 }, 00:33:04.793 { 00:33:04.793 "method": "bdev_iscsi_set_options", 00:33:04.793 "params": { 00:33:04.793 "timeout_sec": 30 00:33:04.793 } 00:33:04.793 }, 00:33:04.793 { 00:33:04.793 "method": "bdev_nvme_set_options", 00:33:04.793 "params": { 00:33:04.793 "action_on_timeout": "none", 00:33:04.793 "timeout_us": 0, 00:33:04.793 "timeout_admin_us": 0, 00:33:04.793 "keep_alive_timeout_ms": 10000, 00:33:04.793 "arbitration_burst": 0, 00:33:04.793 "low_priority_weight": 0, 00:33:04.793 "medium_priority_weight": 0, 00:33:04.793 "high_priority_weight": 0, 00:33:04.793 "nvme_adminq_poll_period_us": 10000, 00:33:04.793 "nvme_ioq_poll_period_us": 0, 00:33:04.793 "io_queue_requests": 512, 00:33:04.793 "delay_cmd_submit": true, 00:33:04.793 "transport_retry_count": 4, 00:33:04.793 "bdev_retry_count": 3, 00:33:04.793 "transport_ack_timeout": 0, 00:33:04.793 "ctrlr_loss_timeout_sec": 0, 00:33:04.793 
"reconnect_delay_sec": 0, 00:33:04.793 "fast_io_fail_timeout_sec": 0, 00:33:04.793 "disable_auto_failback": false, 00:33:04.793 "generate_uuids": false, 00:33:04.793 "transport_tos": 0, 00:33:04.793 "nvme_error_stat": false, 00:33:04.793 "rdma_srq_size": 0, 00:33:04.793 "io_path_stat": false, 00:33:04.793 "allow_accel_sequence": false, 00:33:04.794 "rdma_max_cq_size": 0, 00:33:04.794 "rdma_cm_event_timeout_ms": 0, 00:33:04.794 "dhchap_digests": [ 00:33:04.794 "sha256", 00:33:04.794 "sha384", 00:33:04.794 "sha512" 00:33:04.794 ], 00:33:04.794 "dhchap_dhgroups": [ 00:33:04.794 "null", 00:33:04.794 "ffdhe2048", 00:33:04.794 "ffdhe3072", 00:33:04.794 "ffdhe4096", 00:33:04.794 "ffdhe6144", 00:33:04.794 "ffdhe8192" 00:33:04.794 ] 00:33:04.794 } 00:33:04.794 }, 00:33:04.794 { 00:33:04.794 "method": "bdev_nvme_attach_controller", 00:33:04.794 "params": { 00:33:04.794 "name": "nvme0", 00:33:04.794 "trtype": "TCP", 00:33:04.794 "adrfam": "IPv4", 00:33:04.794 "traddr": "127.0.0.1", 00:33:04.794 "trsvcid": "4420", 00:33:04.794 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:04.794 "prchk_reftag": false, 00:33:04.794 "prchk_guard": false, 00:33:04.794 "ctrlr_loss_timeout_sec": 0, 00:33:04.794 "reconnect_delay_sec": 0, 00:33:04.794 "fast_io_fail_timeout_sec": 0, 00:33:04.794 "psk": "key0", 00:33:04.794 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:04.794 "hdgst": false, 00:33:04.794 "ddgst": false 00:33:04.794 } 00:33:04.794 }, 00:33:04.794 { 00:33:04.794 "method": "bdev_nvme_set_hotplug", 00:33:04.794 "params": { 00:33:04.794 "period_us": 100000, 00:33:04.794 "enable": false 00:33:04.794 } 00:33:04.794 }, 00:33:04.794 { 00:33:04.794 "method": "bdev_wait_for_examine" 00:33:04.794 } 00:33:04.794 ] 00:33:04.794 }, 00:33:04.794 { 00:33:04.794 "subsystem": "nbd", 00:33:04.794 "config": [] 00:33:04.794 } 00:33:04.794 ] 00:33:04.794 }' 00:33:04.794 13:02:56 keyring_file -- keyring/file.sh@114 -- # killprocess 4160423 00:33:04.794 13:02:56 keyring_file -- common/autotest_common.sh@948 -- 
# '[' -z 4160423 ']' 00:33:04.794 13:02:56 keyring_file -- common/autotest_common.sh@952 -- # kill -0 4160423 00:33:04.794 13:02:56 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:04.794 13:02:56 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:04.794 13:02:56 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4160423 00:33:04.794 13:02:56 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:04.794 13:02:56 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:04.794 13:02:56 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4160423' 00:33:04.794 killing process with pid 4160423 00:33:04.794 13:02:56 keyring_file -- common/autotest_common.sh@967 -- # kill 4160423 00:33:04.794 Received shutdown signal, test time was about 1.000000 seconds 00:33:04.794 00:33:04.794 Latency(us) 00:33:04.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:04.794 =================================================================================================================== 00:33:04.794 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:04.794 13:02:56 keyring_file -- common/autotest_common.sh@972 -- # wait 4160423 00:33:05.054 13:02:56 keyring_file -- keyring/file.sh@117 -- # bperfpid=4162907 00:33:05.054 13:02:56 keyring_file -- keyring/file.sh@119 -- # waitforlisten 4162907 /var/tmp/bperf.sock 00:33:05.054 13:02:56 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 4162907 ']' 00:33:05.054 13:02:56 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:05.054 13:02:56 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:05.054 13:02:56 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 
00:33:05.054 13:02:56 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:05.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:05.054 13:02:56 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:33:05.054 "subsystems": [ 00:33:05.054 { 00:33:05.054 "subsystem": "keyring", 00:33:05.054 "config": [ 00:33:05.054 { 00:33:05.054 "method": "keyring_file_add_key", 00:33:05.054 "params": { 00:33:05.054 "name": "key0", 00:33:05.054 "path": "/tmp/tmp.OtW3lxRAgi" 00:33:05.054 } 00:33:05.054 }, 00:33:05.054 { 00:33:05.054 "method": "keyring_file_add_key", 00:33:05.054 "params": { 00:33:05.054 "name": "key1", 00:33:05.054 "path": "/tmp/tmp.ByceEcUXX5" 00:33:05.054 } 00:33:05.054 } 00:33:05.054 ] 00:33:05.054 }, 00:33:05.054 { 00:33:05.054 "subsystem": "iobuf", 00:33:05.054 "config": [ 00:33:05.054 { 00:33:05.054 "method": "iobuf_set_options", 00:33:05.054 "params": { 00:33:05.054 "small_pool_count": 8192, 00:33:05.054 "large_pool_count": 1024, 00:33:05.054 "small_bufsize": 8192, 00:33:05.054 "large_bufsize": 135168 00:33:05.054 } 00:33:05.054 } 00:33:05.054 ] 00:33:05.054 }, 00:33:05.054 { 00:33:05.054 "subsystem": "sock", 00:33:05.054 "config": [ 00:33:05.054 { 00:33:05.054 "method": "sock_set_default_impl", 00:33:05.054 "params": { 00:33:05.054 "impl_name": "posix" 00:33:05.054 } 00:33:05.054 }, 00:33:05.054 { 00:33:05.054 "method": "sock_impl_set_options", 00:33:05.054 "params": { 00:33:05.054 "impl_name": "ssl", 00:33:05.054 "recv_buf_size": 4096, 00:33:05.054 "send_buf_size": 4096, 00:33:05.054 "enable_recv_pipe": true, 00:33:05.054 "enable_quickack": false, 00:33:05.054 "enable_placement_id": 0, 00:33:05.054 "enable_zerocopy_send_server": true, 00:33:05.054 "enable_zerocopy_send_client": false, 00:33:05.054 "zerocopy_threshold": 0, 00:33:05.054 "tls_version": 0, 00:33:05.054 "enable_ktls": false 00:33:05.054 } 00:33:05.054 }, 
00:33:05.054 { 00:33:05.054 "method": "sock_impl_set_options", 00:33:05.054 "params": { 00:33:05.054 "impl_name": "posix", 00:33:05.054 "recv_buf_size": 2097152, 00:33:05.054 "send_buf_size": 2097152, 00:33:05.054 "enable_recv_pipe": true, 00:33:05.054 "enable_quickack": false, 00:33:05.054 "enable_placement_id": 0, 00:33:05.054 "enable_zerocopy_send_server": true, 00:33:05.054 "enable_zerocopy_send_client": false, 00:33:05.054 "zerocopy_threshold": 0, 00:33:05.054 "tls_version": 0, 00:33:05.054 "enable_ktls": false 00:33:05.054 } 00:33:05.054 } 00:33:05.054 ] 00:33:05.054 }, 00:33:05.054 { 00:33:05.054 "subsystem": "vmd", 00:33:05.054 "config": [] 00:33:05.054 }, 00:33:05.054 { 00:33:05.054 "subsystem": "accel", 00:33:05.054 "config": [ 00:33:05.054 { 00:33:05.054 "method": "accel_set_options", 00:33:05.054 "params": { 00:33:05.054 "small_cache_size": 128, 00:33:05.054 "large_cache_size": 16, 00:33:05.054 "task_count": 2048, 00:33:05.054 "sequence_count": 2048, 00:33:05.054 "buf_count": 2048 00:33:05.054 } 00:33:05.054 } 00:33:05.054 ] 00:33:05.054 }, 00:33:05.054 { 00:33:05.054 "subsystem": "bdev", 00:33:05.054 "config": [ 00:33:05.054 { 00:33:05.054 "method": "bdev_set_options", 00:33:05.054 "params": { 00:33:05.054 "bdev_io_pool_size": 65535, 00:33:05.054 "bdev_io_cache_size": 256, 00:33:05.054 "bdev_auto_examine": true, 00:33:05.054 "iobuf_small_cache_size": 128, 00:33:05.054 "iobuf_large_cache_size": 16 00:33:05.054 } 00:33:05.054 }, 00:33:05.054 { 00:33:05.054 "method": "bdev_raid_set_options", 00:33:05.054 "params": { 00:33:05.054 "process_window_size_kb": 1024 00:33:05.054 } 00:33:05.054 }, 00:33:05.054 { 00:33:05.054 "method": "bdev_iscsi_set_options", 00:33:05.054 "params": { 00:33:05.054 "timeout_sec": 30 00:33:05.054 } 00:33:05.054 }, 00:33:05.054 { 00:33:05.054 "method": "bdev_nvme_set_options", 00:33:05.054 "params": { 00:33:05.054 "action_on_timeout": "none", 00:33:05.054 "timeout_us": 0, 00:33:05.054 "timeout_admin_us": 0, 00:33:05.054 
"keep_alive_timeout_ms": 10000, 00:33:05.054 "arbitration_burst": 0, 00:33:05.054 "low_priority_weight": 0, 00:33:05.054 "medium_priority_weight": 0, 00:33:05.054 "high_priority_weight": 0, 00:33:05.054 "nvme_adminq_poll_period_us": 10000, 00:33:05.054 "nvme_ioq_poll_period_us": 0, 00:33:05.054 "io_queue_requests": 512, 00:33:05.054 "delay_cmd_submit": true, 00:33:05.054 "transport_retry_count": 4, 00:33:05.054 "bdev_retry_count": 3, 00:33:05.054 "transport_ack_timeout": 0, 00:33:05.054 "ctrlr_loss_timeout_sec": 0, 00:33:05.054 "reconnect_delay_sec": 0, 00:33:05.054 "fast_io_fail_timeout_sec": 0, 00:33:05.054 "disable_auto_failback": false, 00:33:05.054 "generate_uuids": false, 00:33:05.054 "transport_tos": 0, 00:33:05.054 "nvme_error_stat": false, 00:33:05.054 "rdma_srq_size": 0, 00:33:05.054 "io_path_stat": false, 00:33:05.054 "allow_accel_sequence": false, 00:33:05.054 "rdma_max_cq_size": 0, 00:33:05.054 "rdma_cm_event_timeout_ms": 0, 00:33:05.054 "dhchap_digests": [ 00:33:05.054 "sha256", 00:33:05.054 "sha384", 00:33:05.054 "sha512" 00:33:05.054 ], 00:33:05.054 "dhchap_dhgroups": [ 00:33:05.054 "null", 00:33:05.054 "ffdhe2048", 00:33:05.054 "ffdhe3072", 00:33:05.054 "ffdhe4096", 00:33:05.054 "ffdhe6144", 00:33:05.054 "ffdhe8192" 00:33:05.054 ] 00:33:05.054 } 00:33:05.054 }, 00:33:05.054 { 00:33:05.054 "method": "bdev_nvme_attach_controller", 00:33:05.054 "params": { 00:33:05.054 "name": "nvme0", 00:33:05.054 "trtype": "TCP", 00:33:05.054 "adrfam": "IPv4", 00:33:05.054 "traddr": "127.0.0.1", 00:33:05.054 "trsvcid": "4420", 00:33:05.054 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:05.054 "prchk_reftag": false, 00:33:05.054 "prchk_guard": false, 00:33:05.054 "ctrlr_loss_timeout_sec": 0, 00:33:05.054 "reconnect_delay_sec": 0, 00:33:05.054 "fast_io_fail_timeout_sec": 0, 00:33:05.054 "psk": "key0", 00:33:05.054 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:05.054 "hdgst": false, 00:33:05.054 "ddgst": false 00:33:05.054 } 00:33:05.054 }, 00:33:05.054 { 00:33:05.054 
"method": "bdev_nvme_set_hotplug", 00:33:05.054 "params": { 00:33:05.054 "period_us": 100000, 00:33:05.054 "enable": false 00:33:05.054 } 00:33:05.054 }, 00:33:05.054 { 00:33:05.054 "method": "bdev_wait_for_examine" 00:33:05.054 } 00:33:05.054 ] 00:33:05.054 }, 00:33:05.054 { 00:33:05.054 "subsystem": "nbd", 00:33:05.054 "config": [] 00:33:05.054 } 00:33:05.054 ] 00:33:05.054 }' 00:33:05.054 13:02:56 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:05.054 13:02:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:05.054 [2024-07-15 13:02:56.908724] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 00:33:05.054 [2024-07-15 13:02:56.908838] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162907 ] 00:33:05.054 EAL: No free 2048 kB hugepages reported on node 1 00:33:05.313 [2024-07-15 13:02:57.024365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.313 [2024-07-15 13:02:57.127065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.573 [2024-07-15 13:02:57.299871] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:06.142 13:02:57 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:06.142 13:02:57 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:06.142 13:02:57 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:06.142 13:02:57 keyring_file -- keyring/file.sh@120 -- # jq length 00:33:06.142 13:02:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:06.710 13:02:58 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:06.710 13:02:58 keyring_file -- 
keyring/file.sh@121 -- # get_refcnt key0 00:33:06.710 13:02:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:06.710 13:02:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:06.710 13:02:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:06.710 13:02:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:06.710 13:02:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:06.710 13:02:58 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:06.710 13:02:58 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:33:06.710 13:02:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:06.710 13:02:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:06.710 13:02:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:06.710 13:02:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:06.710 13:02:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:06.970 13:02:58 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:06.970 13:02:58 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:06.970 13:02:58 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:06.970 13:02:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:07.228 13:02:59 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:07.228 13:02:59 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:07.228 13:02:59 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.OtW3lxRAgi /tmp/tmp.ByceEcUXX5 00:33:07.228 13:02:59 keyring_file -- keyring/file.sh@20 -- # killprocess 4162907 
00:33:07.228 13:02:59 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 4162907 ']' 00:33:07.228 13:02:59 keyring_file -- common/autotest_common.sh@952 -- # kill -0 4162907 00:33:07.228 13:02:59 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:07.228 13:02:59 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:07.228 13:02:59 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4162907 00:33:07.228 13:02:59 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:07.228 13:02:59 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:07.228 13:02:59 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4162907' 00:33:07.228 killing process with pid 4162907 00:33:07.228 13:02:59 keyring_file -- common/autotest_common.sh@967 -- # kill 4162907 00:33:07.228 Received shutdown signal, test time was about 1.000000 seconds 00:33:07.228 00:33:07.228 Latency(us) 00:33:07.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.228 =================================================================================================================== 00:33:07.228 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:07.228 13:02:59 keyring_file -- common/autotest_common.sh@972 -- # wait 4162907 00:33:07.487 13:02:59 keyring_file -- keyring/file.sh@21 -- # killprocess 4160395 00:33:07.487 13:02:59 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 4160395 ']' 00:33:07.487 13:02:59 keyring_file -- common/autotest_common.sh@952 -- # kill -0 4160395 00:33:07.487 13:02:59 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:07.487 13:02:59 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:07.487 13:02:59 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4160395 00:33:07.487 13:02:59 keyring_file -- common/autotest_common.sh@954 
-- # process_name=reactor_0 00:33:07.487 13:02:59 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:07.487 13:02:59 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4160395' 00:33:07.487 killing process with pid 4160395 00:33:07.487 13:02:59 keyring_file -- common/autotest_common.sh@967 -- # kill 4160395 00:33:07.487 [2024-07-15 13:02:59.424465] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:07.487 13:02:59 keyring_file -- common/autotest_common.sh@972 -- # wait 4160395 00:33:08.055 00:33:08.055 real 0m17.083s 00:33:08.055 user 0m43.201s 00:33:08.055 sys 0m3.305s 00:33:08.055 13:02:59 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:08.055 13:02:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:08.055 ************************************ 00:33:08.055 END TEST keyring_file 00:33:08.055 ************************************ 00:33:08.055 13:02:59 -- common/autotest_common.sh@1142 -- # return 0 00:33:08.055 13:02:59 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:33:08.055 13:02:59 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:08.055 13:02:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:08.055 13:02:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:08.055 13:02:59 -- common/autotest_common.sh@10 -- # set +x 00:33:08.055 ************************************ 00:33:08.055 START TEST keyring_linux 00:33:08.055 ************************************ 00:33:08.055 13:02:59 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:08.055 * Looking for test storage... 
00:33:08.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:08.055 13:02:59 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:08.055 13:02:59 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:08.055 13:02:59 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.055 13:02:59 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.055 13:02:59 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.055 13:02:59 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.055 13:02:59 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.055 13:02:59 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.055 13:02:59 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.055 13:02:59 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:08.055 13:02:59 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:08.055 13:02:59 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:08.055 13:02:59 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:08.055 13:02:59 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:08.055 13:02:59 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:08.055 13:02:59 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:08.055 13:02:59 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:08.055 13:02:59 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:08.055 13:02:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:08.055 13:02:59 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:33:08.055 13:02:59 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:08.055 13:02:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:08.055 13:02:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:08.055 13:02:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:08.055 13:02:59 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:08.314 13:03:00 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:08.314 13:03:00 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:08.314 /tmp/:spdk-test:key0 00:33:08.314 13:03:00 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:08.314 13:03:00 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:08.314 13:03:00 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:08.314 13:03:00 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:08.314 13:03:00 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:08.314 13:03:00 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:08.314 13:03:00 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:08.314 13:03:00 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:33:08.314 13:03:00 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:08.314 13:03:00 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:08.314 13:03:00 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:08.314 13:03:00 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:08.314 13:03:00 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:08.314 13:03:00 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:08.314 13:03:00 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:08.314 /tmp/:spdk-test:key1 00:33:08.314 13:03:00 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=4163537 00:33:08.314 13:03:00 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 4163537 00:33:08.314 13:03:00 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 4163537 ']' 00:33:08.314 13:03:00 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:08.314 13:03:00 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:08.314 13:03:00 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:08.314 13:03:00 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:08.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:08.314 13:03:00 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:08.314 13:03:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:08.314 [2024-07-15 13:03:00.156400] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:33:08.314 [2024-07-15 13:03:00.156461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4163537 ] 00:33:08.314 EAL: No free 2048 kB hugepages reported on node 1 00:33:08.314 [2024-07-15 13:03:00.237482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.574 [2024-07-15 13:03:00.330449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.833 13:03:00 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:08.833 13:03:00 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:08.833 13:03:00 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:08.833 13:03:00 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.833 13:03:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:08.833 [2024-07-15 13:03:00.609427] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:08.833 null0 00:33:08.833 [2024-07-15 13:03:00.641458] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:08.833 [2024-07-15 13:03:00.641823] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:08.833 13:03:00 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.833 13:03:00 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:08.833 959797523 00:33:08.833 13:03:00 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:08.833 472426901 00:33:08.833 13:03:00 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=4163574 00:33:08.833 13:03:00 keyring_linux -- keyring/linux.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:08.833 13:03:00 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 4163574 /var/tmp/bperf.sock 00:33:08.833 13:03:00 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 4163574 ']' 00:33:08.833 13:03:00 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:08.833 13:03:00 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:08.833 13:03:00 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:08.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:08.833 13:03:00 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:08.833 13:03:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:08.833 [2024-07-15 13:03:00.716144] Starting SPDK v24.09-pre git sha1 32a79de81 / DPDK 24.03.0 initialization... 
00:33:08.833 [2024-07-15 13:03:00.716200] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4163574 ] 00:33:08.833 EAL: No free 2048 kB hugepages reported on node 1 00:33:09.091 [2024-07-15 13:03:00.796432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.092 [2024-07-15 13:03:00.900064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:10.025 13:03:01 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:10.025 13:03:01 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:10.025 13:03:01 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:10.025 13:03:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:10.283 13:03:02 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:10.283 13:03:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:10.851 13:03:02 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:10.851 13:03:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:10.851 [2024-07-15 13:03:02.710697] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:10.851 
nvme0n1 00:33:11.110 13:03:02 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:11.110 13:03:02 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:11.110 13:03:02 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:11.110 13:03:02 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:11.110 13:03:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:11.110 13:03:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:11.110 13:03:03 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:11.110 13:03:03 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:11.371 13:03:03 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:11.371 13:03:03 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:11.371 13:03:03 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:11.371 13:03:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:11.371 13:03:03 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:11.630 13:03:03 keyring_linux -- keyring/linux.sh@25 -- # sn=959797523 00:33:11.630 13:03:03 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:11.630 13:03:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:11.630 13:03:03 keyring_linux -- keyring/linux.sh@26 -- # [[ 959797523 == \9\5\9\7\9\7\5\2\3 ]] 00:33:11.630 13:03:03 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 959797523 00:33:11.630 13:03:03 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:11.630 13:03:03 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:11.630 Running I/O for 1 seconds... 00:33:12.565 00:33:12.565 Latency(us) 00:33:12.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.565 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:12.565 nvme0n1 : 1.01 9791.14 38.25 0.00 0.00 12999.37 9115.46 24188.74 00:33:12.565 =================================================================================================================== 00:33:12.565 Total : 9791.14 38.25 0.00 0.00 12999.37 9115.46 24188.74 00:33:12.565 0 00:33:12.565 13:03:04 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:12.565 13:03:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:12.824 13:03:04 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:12.824 13:03:04 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:12.824 13:03:04 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:12.824 13:03:04 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:12.824 13:03:04 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:12.824 13:03:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:13.082 13:03:04 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:13.082 13:03:04 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:13.082 13:03:04 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:13.082 13:03:04 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:13.082 13:03:04 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:33:13.082 13:03:04 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:13.082 13:03:04 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:13.082 13:03:04 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:13.082 13:03:05 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:13.082 13:03:04 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:13.082 13:03:05 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:13.082 13:03:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:13.341 [2024-07-15 13:03:05.239175] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:13.341 [2024-07-15 13:03:05.239341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdedc20 (107): Transport endpoint is not connected 00:33:13.341 [2024-07-15 13:03:05.240331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xdedc20 (9): Bad file descriptor 00:33:13.341 [2024-07-15 13:03:05.241331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:13.341 [2024-07-15 13:03:05.241347] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:13.341 [2024-07-15 13:03:05.241360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:13.341 request: 00:33:13.341 { 00:33:13.341 "name": "nvme0", 00:33:13.341 "trtype": "tcp", 00:33:13.341 "traddr": "127.0.0.1", 00:33:13.341 "adrfam": "ipv4", 00:33:13.341 "trsvcid": "4420", 00:33:13.341 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:13.341 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:13.341 "prchk_reftag": false, 00:33:13.341 "prchk_guard": false, 00:33:13.341 "hdgst": false, 00:33:13.341 "ddgst": false, 00:33:13.341 "psk": ":spdk-test:key1", 00:33:13.341 "method": "bdev_nvme_attach_controller", 00:33:13.341 "req_id": 1 00:33:13.341 } 00:33:13.341 Got JSON-RPC error response 00:33:13.341 response: 00:33:13.341 { 00:33:13.341 "code": -5, 00:33:13.341 "message": "Input/output error" 00:33:13.341 } 00:33:13.341 13:03:05 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:33:13.341 13:03:05 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:13.341 13:03:05 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:13.341 13:03:05 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:13.341 13:03:05 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:13.341 13:03:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:13.341 13:03:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:13.341 13:03:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:13.341 13:03:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:13.341 13:03:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:33:13.341 13:03:05 keyring_linux -- keyring/linux.sh@33 -- # sn=959797523 00:33:13.341 13:03:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 959797523 00:33:13.341 1 links removed 00:33:13.341 13:03:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:13.341 13:03:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:13.341 13:03:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:13.341 13:03:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:13.341 13:03:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:13.341 13:03:05 keyring_linux -- keyring/linux.sh@33 -- # sn=472426901 00:33:13.341 13:03:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 472426901 00:33:13.341 1 links removed 00:33:13.341 13:03:05 keyring_linux -- keyring/linux.sh@41 -- # killprocess 4163574 00:33:13.341 13:03:05 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 4163574 ']' 00:33:13.341 13:03:05 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 4163574 00:33:13.341 13:03:05 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:13.599 13:03:05 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:13.599 13:03:05 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4163574 00:33:13.599 13:03:05 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:13.599 13:03:05 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:13.599 13:03:05 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4163574' 00:33:13.599 killing process with pid 4163574 00:33:13.599 13:03:05 keyring_linux -- common/autotest_common.sh@967 -- # kill 4163574 00:33:13.599 Received shutdown signal, test time was about 1.000000 seconds 00:33:13.599 00:33:13.599 Latency(us) 00:33:13.599 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:13.599 =================================================================================================================== 00:33:13.599 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:13.599 13:03:05 keyring_linux -- common/autotest_common.sh@972 -- # wait 4163574 00:33:13.857 13:03:05 keyring_linux -- keyring/linux.sh@42 -- # killprocess 4163537 00:33:13.857 13:03:05 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 4163537 ']' 00:33:13.857 13:03:05 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 4163537 00:33:13.857 13:03:05 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:13.857 13:03:05 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:13.857 13:03:05 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4163537 00:33:13.857 13:03:05 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:13.857 13:03:05 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:13.857 13:03:05 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4163537' 00:33:13.857 killing process with pid 4163537 00:33:13.857 13:03:05 keyring_linux -- common/autotest_common.sh@967 -- # kill 4163537 00:33:13.857 13:03:05 keyring_linux -- common/autotest_common.sh@972 -- # wait 4163537 00:33:14.116 00:33:14.116 real 0m6.087s 00:33:14.116 user 0m12.423s 00:33:14.116 sys 0m1.648s 00:33:14.116 13:03:05 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:14.116 13:03:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:14.116 ************************************ 00:33:14.116 END TEST keyring_linux 00:33:14.116 ************************************ 00:33:14.116 13:03:05 -- common/autotest_common.sh@1142 -- # return 0 00:33:14.116 13:03:05 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:33:14.116 13:03:05 -- spdk/autotest.sh@312 -- # '[' 0 -eq 
1 ']' 00:33:14.116 13:03:05 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:33:14.116 13:03:05 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:33:14.116 13:03:05 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:33:14.116 13:03:05 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:33:14.116 13:03:05 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:33:14.116 13:03:05 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:14.116 13:03:05 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:14.116 13:03:05 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:33:14.116 13:03:05 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:14.116 13:03:05 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:33:14.116 13:03:05 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:14.116 13:03:05 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:14.116 13:03:05 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:14.116 13:03:05 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:33:14.116 13:03:05 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:33:14.116 13:03:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:14.116 13:03:05 -- common/autotest_common.sh@10 -- # set +x 00:33:14.116 13:03:05 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:33:14.116 13:03:05 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:14.116 13:03:05 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:14.116 13:03:05 -- common/autotest_common.sh@10 -- # set +x 00:33:20.682 INFO: APP EXITING 00:33:20.682 INFO: killing all VMs 00:33:20.682 INFO: killing vhost app 00:33:20.682 WARN: no vhost pid file found 00:33:20.683 INFO: EXIT DONE 00:33:22.599 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:33:22.599 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:33:22.599 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:33:22.599 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:33:22.599 0000:00:04.4 (8086 2021): Already using the ioatdma driver 
00:33:22.599 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:33:22.599 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:33:22.599 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:33:22.599 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:33:22.599 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:33:22.599 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:33:22.599 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:33:22.599 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:33:22.858 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:33:22.858 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:33:22.858 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:33:22.858 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:33:25.470 Cleaning 00:33:25.470 Removing: /var/run/dpdk/spdk0/config 00:33:25.471 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:25.471 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:25.471 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:25.471 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:25.471 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:25.471 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:25.471 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:25.728 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:25.728 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:25.729 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:25.729 Removing: /var/run/dpdk/spdk1/config 00:33:25.729 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:25.729 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:25.729 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:25.729 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:25.729 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:25.729 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:25.729 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:25.729 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:25.729 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:25.729 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:25.729 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:25.729 Removing: /var/run/dpdk/spdk2/config 00:33:25.729 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:25.729 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:25.729 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:25.729 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:25.729 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:25.729 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:25.729 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:25.729 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:25.729 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:25.729 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:25.729 Removing: /var/run/dpdk/spdk3/config 00:33:25.729 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:25.729 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:25.729 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:25.729 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:25.729 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:25.729 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:25.729 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:25.729 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:25.729 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:25.729 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:25.729 Removing: /var/run/dpdk/spdk4/config 00:33:25.729 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:25.729 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:25.729 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:25.729 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:25.729 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:25.729 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:25.729 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:25.729 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:25.729 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:25.729 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:25.729 Removing: /dev/shm/bdev_svc_trace.1 00:33:25.729 Removing: /dev/shm/nvmf_trace.0 00:33:25.729 Removing: /dev/shm/spdk_tgt_trace.pid3729932 00:33:25.729 Removing: /var/run/dpdk/spdk0 00:33:25.729 Removing: /var/run/dpdk/spdk1 00:33:25.729 Removing: /var/run/dpdk/spdk2 00:33:25.729 Removing: /var/run/dpdk/spdk3 00:33:25.729 Removing: /var/run/dpdk/spdk4 00:33:25.729 Removing: /var/run/dpdk/spdk_pid3727515 00:33:25.729 Removing: /var/run/dpdk/spdk_pid3728733 00:33:25.729 Removing: /var/run/dpdk/spdk_pid3729932 00:33:25.729 Removing: /var/run/dpdk/spdk_pid3730632 00:33:25.729 Removing: /var/run/dpdk/spdk_pid3731701 00:33:25.729 Removing: /var/run/dpdk/spdk_pid3731828 00:33:25.729 Removing: /var/run/dpdk/spdk_pid3732849 00:33:25.729 Removing: /var/run/dpdk/spdk_pid3733096 00:33:25.729 Removing: /var/run/dpdk/spdk_pid3733340 00:33:25.729 Removing: /var/run/dpdk/spdk_pid3735289 00:33:25.988 Removing: /var/run/dpdk/spdk_pid3736606 00:33:25.988 Removing: /var/run/dpdk/spdk_pid3736923 00:33:25.988 Removing: /var/run/dpdk/spdk_pid3737307 00:33:25.988 Removing: /var/run/dpdk/spdk_pid3737572 00:33:25.988 Removing: /var/run/dpdk/spdk_pid3738027 00:33:25.988 Removing: /var/run/dpdk/spdk_pid3738254 00:33:25.988 Removing: /var/run/dpdk/spdk_pid3738479 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3738784 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3739866 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3743334 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3743794 00:33:25.989 
Removing: /var/run/dpdk/spdk_pid3744086 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3744094 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3744653 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3744915 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3745398 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3745479 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3745773 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3745783 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3746071 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3746085 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3746721 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3746987 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3747310 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3747661 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3747883 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3747951 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3748309 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3748644 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3748922 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3749207 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3749484 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3749960 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3750434 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3750715 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3751001 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3751280 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3751563 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3751845 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3752128 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3752407 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3752690 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3752981 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3753269 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3753595 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3753881 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3754194 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3754419 00:33:25.989 Removing: 
/var/run/dpdk/spdk_pid3754754 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3758602 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3806053 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3810814 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3821924 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3827525 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3831971 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3832579 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3839349 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3846144 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3846146 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3847070 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3847985 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3849151 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3849677 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3849755 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3850384 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3850589 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3850598 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3851647 00:33:25.989 Removing: /var/run/dpdk/spdk_pid3852685 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3853496 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3854262 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3854264 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3854533 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3855920 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3857032 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3865970 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3866255 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3870804 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3876979 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3880391 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3891933 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3901985 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3903819 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3904870 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3923130 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3927048 
00:33:26.247 Removing: /var/run/dpdk/spdk_pid3965189 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3970796 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3972598 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3974462 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3974737 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3975008 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3975349 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3976173 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3978722 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3980097 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3980869 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3983284 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3984023 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3984955 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3989272 00:33:26.247 Removing: /var/run/dpdk/spdk_pid3999886 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4004366 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4010893 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4012364 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4013863 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4018428 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4022780 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4031000 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4031005 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4035936 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4036106 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4036347 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4036883 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4036892 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4041692 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4042345 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4046995 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4049881 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4055831 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4061467 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4071931 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4079993 00:33:26.247 Removing: 
/var/run/dpdk/spdk_pid4079995 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4099737 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4100777 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4101390 00:33:26.247 Removing: /var/run/dpdk/spdk_pid4102119 00:33:26.248 Removing: /var/run/dpdk/spdk_pid4102965 00:33:26.248 Removing: /var/run/dpdk/spdk_pid4104013 00:33:26.248 Removing: /var/run/dpdk/spdk_pid4104814 00:33:26.248 Removing: /var/run/dpdk/spdk_pid4105600 00:33:26.248 Removing: /var/run/dpdk/spdk_pid4109906 00:33:26.248 Removing: /var/run/dpdk/spdk_pid4110261 00:33:26.248 Removing: /var/run/dpdk/spdk_pid4116648 00:33:26.248 Removing: /var/run/dpdk/spdk_pid4116849 00:33:26.248 Removing: /var/run/dpdk/spdk_pid4119352 00:33:26.248 Removing: /var/run/dpdk/spdk_pid4128376 00:33:26.248 Removing: /var/run/dpdk/spdk_pid4128382 00:33:26.506 Removing: /var/run/dpdk/spdk_pid4133567 00:33:26.506 Removing: /var/run/dpdk/spdk_pid4135678 00:33:26.506 Removing: /var/run/dpdk/spdk_pid4137802 00:33:26.506 Removing: /var/run/dpdk/spdk_pid4139110 00:33:26.506 Removing: /var/run/dpdk/spdk_pid4141242 00:33:26.506 Removing: /var/run/dpdk/spdk_pid4142561 00:33:26.506 Removing: /var/run/dpdk/spdk_pid4151740 00:33:26.506 Removing: /var/run/dpdk/spdk_pid4152262 00:33:26.506 Removing: /var/run/dpdk/spdk_pid4152831 00:33:26.506 Removing: /var/run/dpdk/spdk_pid4155243 00:33:26.506 Removing: /var/run/dpdk/spdk_pid4155771 00:33:26.506 Removing: /var/run/dpdk/spdk_pid4156304 00:33:26.506 Removing: /var/run/dpdk/spdk_pid4160395 00:33:26.506 Removing: /var/run/dpdk/spdk_pid4160423 00:33:26.506 Removing: /var/run/dpdk/spdk_pid4162907 00:33:26.506 Removing: /var/run/dpdk/spdk_pid4163537 00:33:26.506 Removing: /var/run/dpdk/spdk_pid4163574 00:33:26.506 Clean 00:33:26.506 13:03:18 -- common/autotest_common.sh@1451 -- # return 0 00:33:26.506 13:03:18 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:33:26.506 13:03:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:26.506 13:03:18 -- 
common/autotest_common.sh@10 -- # set +x 00:33:26.506 13:03:18 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:33:26.506 13:03:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:26.506 13:03:18 -- common/autotest_common.sh@10 -- # set +x 00:33:26.506 13:03:18 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:26.506 13:03:18 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:26.506 13:03:18 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:26.506 13:03:18 -- spdk/autotest.sh@391 -- # hash lcov 00:33:26.506 13:03:18 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:26.506 13:03:18 -- spdk/autotest.sh@393 -- # hostname 00:33:26.506 13:03:18 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-16 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:26.765 geninfo: WARNING: invalid characters removed from testname! 
00:33:58.847 13:03:47 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:00.223 13:03:51 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:02.752 13:03:54 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:06.037 13:03:57 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:08.569 13:04:00 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:11.851 13:04:03 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:14.384 13:04:06 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:14.643 13:04:06 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:14.643 13:04:06 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:14.643 13:04:06 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.643 13:04:06 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:14.643 13:04:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.643 13:04:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.643 13:04:06 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.643 13:04:06 -- paths/export.sh@5 -- $ export PATH 00:34:14.643 13:04:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.643 13:04:06 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:14.643 13:04:06 -- common/autobuild_common.sh@444 -- $ date +%s 00:34:14.643 13:04:06 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721041446.XXXXXX 00:34:14.643 13:04:06 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721041446.DnH32u 00:34:14.643 13:04:06 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:34:14.643 13:04:06 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:34:14.643 13:04:06 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:34:14.643 13:04:06 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:14.643 13:04:06 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:14.643 13:04:06 -- common/autobuild_common.sh@460 -- $ get_config_params 00:34:14.643 13:04:06 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:34:14.643 13:04:06 -- common/autotest_common.sh@10 -- $ set +x 00:34:14.643 13:04:06 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:34:14.643 13:04:06 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:34:14.643 13:04:06 -- pm/common@17 -- $ local monitor 00:34:14.643 13:04:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:14.643 13:04:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:14.643 13:04:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:14.643 13:04:06 -- pm/common@21 -- $ date +%s 00:34:14.643 13:04:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:14.643 13:04:06 -- pm/common@21 -- $ date +%s 00:34:14.643 13:04:06 -- pm/common@25 -- $ sleep 1 00:34:14.643 13:04:06 -- pm/common@21 -- $ date +%s 00:34:14.643 13:04:06 -- pm/common@21 -- $ date +%s 00:34:14.643 13:04:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721041446 00:34:14.643 13:04:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721041446 00:34:14.643 13:04:06 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p 
monitor.autopackage.sh.1721041446 00:34:14.643 13:04:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721041446 00:34:14.643 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721041446_collect-vmstat.pm.log 00:34:14.643 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721041446_collect-cpu-load.pm.log 00:34:14.643 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721041446_collect-cpu-temp.pm.log 00:34:14.643 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721041446_collect-bmc-pm.bmc.pm.log 00:34:15.578 13:04:07 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:34:15.578 13:04:07 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:34:15.578 13:04:07 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:15.578 13:04:07 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:15.578 13:04:07 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:15.578 13:04:07 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:15.578 13:04:07 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:15.578 13:04:07 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:15.578 13:04:07 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:15.578 13:04:07 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:15.578 13:04:07 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:15.578 13:04:07 -- pm/common@29 -- $ signal_monitor_resources TERM 
00:34:15.578 13:04:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:15.578 13:04:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:15.578 13:04:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:15.578 13:04:07 -- pm/common@44 -- $ pid=4175455 00:34:15.578 13:04:07 -- pm/common@50 -- $ kill -TERM 4175455 00:34:15.578 13:04:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:15.578 13:04:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:15.578 13:04:07 -- pm/common@44 -- $ pid=4175456 00:34:15.578 13:04:07 -- pm/common@50 -- $ kill -TERM 4175456 00:34:15.578 13:04:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:15.578 13:04:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:15.578 13:04:07 -- pm/common@44 -- $ pid=4175458 00:34:15.578 13:04:07 -- pm/common@50 -- $ kill -TERM 4175458 00:34:15.578 13:04:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:15.578 13:04:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:15.578 13:04:07 -- pm/common@44 -- $ pid=4175481 00:34:15.578 13:04:07 -- pm/common@50 -- $ sudo -E kill -TERM 4175481 00:34:15.578 + [[ -n 3615541 ]] 00:34:15.578 + sudo kill 3615541 00:34:15.587 [Pipeline] } 00:34:15.607 [Pipeline] // stage 00:34:15.612 [Pipeline] } 00:34:15.633 [Pipeline] // timeout 00:34:15.639 [Pipeline] } 00:34:15.661 [Pipeline] // catchError 00:34:15.667 [Pipeline] } 00:34:15.687 [Pipeline] // wrap 00:34:15.693 [Pipeline] } 00:34:15.710 [Pipeline] // catchError 00:34:15.721 [Pipeline] stage 00:34:15.723 [Pipeline] { (Epilogue) 00:34:15.741 [Pipeline] catchError 00:34:15.743 [Pipeline] { 00:34:15.758 [Pipeline] echo 00:34:15.759 Cleanup 
processes 00:34:15.764 [Pipeline] sh 00:34:16.044 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:16.044 4175565 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:34:16.044 4175901 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:16.057 [Pipeline] sh 00:34:16.337 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:16.337 ++ grep -v 'sudo pgrep' 00:34:16.337 ++ awk '{print $1}' 00:34:16.337 + sudo kill -9 4175565 00:34:16.349 [Pipeline] sh 00:34:16.629 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:31.566 [Pipeline] sh 00:34:31.847 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:31.847 Artifacts sizes are good 00:34:31.862 [Pipeline] archiveArtifacts 00:34:31.869 Archiving artifacts 00:34:32.018 [Pipeline] sh 00:34:32.296 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:32.309 [Pipeline] cleanWs 00:34:32.319 [WS-CLEANUP] Deleting project workspace... 00:34:32.319 [WS-CLEANUP] Deferred wipeout is used... 00:34:32.326 [WS-CLEANUP] done 00:34:32.328 [Pipeline] } 00:34:32.354 [Pipeline] // catchError 00:34:32.368 [Pipeline] sh 00:34:32.646 + logger -p user.info -t JENKINS-CI 00:34:32.654 [Pipeline] } 00:34:32.669 [Pipeline] // stage 00:34:32.673 [Pipeline] } 00:34:32.688 [Pipeline] // node 00:34:32.692 [Pipeline] End of Pipeline 00:34:32.832 Finished: SUCCESS